Jan 29 08:34:02 localhost kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 29 08:34:02 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 29 08:34:02 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 29 08:34:02 localhost kernel: BIOS-provided physical RAM map:
Jan 29 08:34:02 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 08:34:02 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 08:34:02 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 08:34:02 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 29 08:34:02 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 29 08:34:02 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 08:34:02 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 08:34:02 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 29 08:34:02 localhost kernel: NX (Execute Disable) protection: active
Jan 29 08:34:02 localhost kernel: APIC: Static calls initialized
Jan 29 08:34:02 localhost kernel: SMBIOS 2.8 present.
Jan 29 08:34:02 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 29 08:34:02 localhost kernel: Hypervisor detected: KVM
Jan 29 08:34:02 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 08:34:02 localhost kernel: kvm-clock: using sched offset of 5483134191 cycles
Jan 29 08:34:02 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 08:34:02 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 29 08:34:02 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 08:34:02 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 08:34:02 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 29 08:34:02 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 08:34:02 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 29 08:34:02 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 29 08:34:02 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 29 08:34:02 localhost kernel: Using GB pages for direct mapping
Jan 29 08:34:02 localhost kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 29 08:34:02 localhost kernel: ACPI: Early table checksum verification disabled
Jan 29 08:34:02 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 29 08:34:02 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 08:34:02 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 08:34:02 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 08:34:02 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 29 08:34:02 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 08:34:02 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 08:34:02 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 29 08:34:02 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 29 08:34:02 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 29 08:34:02 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 29 08:34:02 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 29 08:34:02 localhost kernel: No NUMA configuration found
Jan 29 08:34:02 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 29 08:34:02 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 29 08:34:02 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 29 08:34:02 localhost kernel: Zone ranges:
Jan 29 08:34:02 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 08:34:02 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 29 08:34:02 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 29 08:34:02 localhost kernel:   Device   empty
Jan 29 08:34:02 localhost kernel: Movable zone start for each node
Jan 29 08:34:02 localhost kernel: Early memory node ranges
Jan 29 08:34:02 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 08:34:02 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 29 08:34:02 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 29 08:34:02 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 29 08:34:02 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 08:34:02 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 08:34:02 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 29 08:34:02 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 08:34:02 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 08:34:02 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 08:34:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 08:34:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 08:34:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 08:34:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 08:34:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 08:34:02 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 08:34:02 localhost kernel: TSC deadline timer available
Jan 29 08:34:02 localhost kernel: CPU topo: Max. logical packages:   8
Jan 29 08:34:02 localhost kernel: CPU topo: Max. logical dies:       8
Jan 29 08:34:02 localhost kernel: CPU topo: Max. dies per package:   1
Jan 29 08:34:02 localhost kernel: CPU topo: Max. threads per core:   1
Jan 29 08:34:02 localhost kernel: CPU topo: Num. cores per package:     1
Jan 29 08:34:02 localhost kernel: CPU topo: Num. threads per package:   1
Jan 29 08:34:02 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 29 08:34:02 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 08:34:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 29 08:34:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 29 08:34:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 29 08:34:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 29 08:34:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 29 08:34:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 29 08:34:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 29 08:34:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 29 08:34:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 29 08:34:02 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 29 08:34:02 localhost kernel: Booting paravirtualized kernel on KVM
Jan 29 08:34:02 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 08:34:02 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 29 08:34:02 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 29 08:34:02 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 29 08:34:02 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 29 08:34:02 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 08:34:02 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 29 08:34:02 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 29 08:34:02 localhost kernel: random: crng init done
Jan 29 08:34:02 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 29 08:34:02 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 08:34:02 localhost kernel: Fallback order for Node 0: 0 
Jan 29 08:34:02 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 29 08:34:02 localhost kernel: Policy zone: Normal
Jan 29 08:34:02 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 08:34:02 localhost kernel: software IO TLB: area num 8.
Jan 29 08:34:02 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 29 08:34:02 localhost kernel: ftrace: allocating 49438 entries in 194 pages
Jan 29 08:34:02 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 29 08:34:02 localhost kernel: Dynamic Preempt: voluntary
Jan 29 08:34:02 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 08:34:02 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 29 08:34:02 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 29 08:34:02 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 29 08:34:02 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 29 08:34:02 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 29 08:34:02 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 08:34:02 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 29 08:34:02 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 29 08:34:02 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 29 08:34:02 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 29 08:34:02 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 29 08:34:02 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 08:34:02 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 29 08:34:02 localhost kernel: Console: colour VGA+ 80x25
Jan 29 08:34:02 localhost kernel: printk: console [ttyS0] enabled
Jan 29 08:34:02 localhost kernel: ACPI: Core revision 20230331
Jan 29 08:34:02 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 08:34:02 localhost kernel: x2apic enabled
Jan 29 08:34:02 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 08:34:02 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 08:34:02 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 29 08:34:02 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 08:34:02 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 08:34:02 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 08:34:02 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 29 08:34:02 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 08:34:02 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 08:34:02 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 08:34:02 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 29 08:34:02 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 08:34:02 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 29 08:34:02 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 08:34:02 localhost kernel: active return thunk: retbleed_return_thunk
Jan 29 08:34:02 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 08:34:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 08:34:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 08:34:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 08:34:02 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 29 08:34:02 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 08:34:02 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 29 08:34:02 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 29 08:34:02 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 29 08:34:02 localhost kernel: landlock: Up and running.
Jan 29 08:34:02 localhost kernel: Yama: becoming mindful.
Jan 29 08:34:02 localhost kernel: SELinux:  Initializing.
Jan 29 08:34:02 localhost kernel: LSM support for eBPF active
Jan 29 08:34:02 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 08:34:02 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 08:34:02 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 08:34:02 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 08:34:02 localhost kernel: ... version:                0
Jan 29 08:34:02 localhost kernel: ... bit width:              48
Jan 29 08:34:02 localhost kernel: ... generic registers:      6
Jan 29 08:34:02 localhost kernel: ... value mask:             0000ffffffffffff
Jan 29 08:34:02 localhost kernel: ... max period:             00007fffffffffff
Jan 29 08:34:02 localhost kernel: ... fixed-purpose events:   0
Jan 29 08:34:02 localhost kernel: ... event mask:             000000000000003f
Jan 29 08:34:02 localhost kernel: signal: max sigframe size: 1776
Jan 29 08:34:02 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 29 08:34:02 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 29 08:34:02 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 29 08:34:02 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 29 08:34:02 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 29 08:34:02 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 29 08:34:02 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 29 08:34:02 localhost kernel: node 0 deferred pages initialised in 11ms
Jan 29 08:34:02 localhost kernel: Memory: 7763848K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618404K reserved, 0K cma-reserved)
Jan 29 08:34:02 localhost kernel: devtmpfs: initialized
Jan 29 08:34:02 localhost kernel: x86/mm: Memory block size: 128MB
Jan 29 08:34:02 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 08:34:02 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 29 08:34:02 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 08:34:02 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 08:34:02 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 29 08:34:02 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 08:34:02 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 08:34:02 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 29 08:34:02 localhost kernel: audit: type=2000 audit(1769675640.583:1): state=initialized audit_enabled=0 res=1
Jan 29 08:34:02 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 29 08:34:02 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 08:34:02 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 08:34:02 localhost kernel: cpuidle: using governor menu
Jan 29 08:34:02 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 08:34:02 localhost kernel: PCI: Using configuration type 1 for base access
Jan 29 08:34:02 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 29 08:34:02 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 08:34:02 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 08:34:02 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 08:34:02 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 08:34:02 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 08:34:02 localhost kernel: Demotion targets for Node 0: null
Jan 29 08:34:02 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 08:34:02 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 29 08:34:02 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 29 08:34:02 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 08:34:02 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 08:34:02 localhost kernel: ACPI: Interpreter enabled
Jan 29 08:34:02 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 29 08:34:02 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 08:34:02 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 08:34:02 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 08:34:02 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 29 08:34:02 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 08:34:02 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [3] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [4] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [5] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [6] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [7] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [8] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [9] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [10] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [11] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [12] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [13] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [14] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [15] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [16] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [17] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [18] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [19] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [20] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [21] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [22] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [23] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [24] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [25] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [26] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [27] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [28] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [29] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [30] registered
Jan 29 08:34:02 localhost kernel: acpiphp: Slot [31] registered
Jan 29 08:34:02 localhost kernel: PCI host bridge to bus 0000:00
Jan 29 08:34:02 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 29 08:34:02 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 29 08:34:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 08:34:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 08:34:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 29 08:34:02 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 08:34:02 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 29 08:34:02 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 29 08:34:02 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 29 08:34:02 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 29 08:34:02 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 29 08:34:02 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 29 08:34:02 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 08:34:02 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 29 08:34:02 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 29 08:34:02 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 29 08:34:02 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 29 08:34:02 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 29 08:34:02 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 29 08:34:02 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 29 08:34:02 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 29 08:34:02 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 29 08:34:02 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 29 08:34:02 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 29 08:34:02 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 29 08:34:02 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 29 08:34:02 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 29 08:34:02 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 29 08:34:02 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 08:34:02 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 08:34:02 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 08:34:02 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 08:34:02 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 08:34:02 localhost kernel: iommu: Default domain type: Translated
Jan 29 08:34:02 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 08:34:02 localhost kernel: SCSI subsystem initialized
Jan 29 08:34:02 localhost kernel: ACPI: bus type USB registered
Jan 29 08:34:02 localhost kernel: usbcore: registered new interface driver usbfs
Jan 29 08:34:02 localhost kernel: usbcore: registered new interface driver hub
Jan 29 08:34:02 localhost kernel: usbcore: registered new device driver usb
Jan 29 08:34:02 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 29 08:34:02 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 29 08:34:02 localhost kernel: PTP clock support registered
Jan 29 08:34:02 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 29 08:34:02 localhost kernel: NetLabel: Initializing
Jan 29 08:34:02 localhost kernel: NetLabel:  domain hash size = 128
Jan 29 08:34:02 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 29 08:34:02 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 29 08:34:02 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 29 08:34:02 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 08:34:02 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 08:34:02 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 29 08:34:02 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 29 08:34:02 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 29 08:34:02 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 08:34:02 localhost kernel: vgaarb: loaded
Jan 29 08:34:02 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 08:34:02 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 08:34:02 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 08:34:02 localhost kernel: pnp: PnP ACPI init
Jan 29 08:34:02 localhost kernel: pnp 00:03: [dma 2]
Jan 29 08:34:02 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 29 08:34:02 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 08:34:02 localhost kernel: NET: Registered PF_INET protocol family
Jan 29 08:34:02 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 08:34:02 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 29 08:34:02 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 08:34:02 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 08:34:02 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 29 08:34:02 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 29 08:34:02 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 29 08:34:02 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 08:34:02 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 08:34:02 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 08:34:02 localhost kernel: NET: Registered PF_XDP protocol family
Jan 29 08:34:02 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 29 08:34:02 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 29 08:34:02 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 08:34:02 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 29 08:34:02 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 29 08:34:02 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 08:34:02 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 29 08:34:02 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 22041 usecs
Jan 29 08:34:02 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 29 08:34:02 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 29 08:34:02 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 29 08:34:02 localhost kernel: ACPI: bus type thunderbolt registered
Jan 29 08:34:02 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 29 08:34:02 localhost kernel: Initialise system trusted keyrings
Jan 29 08:34:02 localhost kernel: Key type blacklist registered
Jan 29 08:34:02 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 29 08:34:02 localhost kernel: zbud: loaded
Jan 29 08:34:02 localhost kernel: integrity: Platform Keyring initialized
Jan 29 08:34:02 localhost kernel: integrity: Machine keyring initialized
Jan 29 08:34:02 localhost kernel: Freeing initrd memory: 88000K
Jan 29 08:34:02 localhost kernel: NET: Registered PF_ALG protocol family
Jan 29 08:34:02 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 29 08:34:02 localhost kernel: Key type asymmetric registered
Jan 29 08:34:02 localhost kernel: Asymmetric key parser 'x509' registered
Jan 29 08:34:02 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 29 08:34:02 localhost kernel: io scheduler mq-deadline registered
Jan 29 08:34:02 localhost kernel: io scheduler kyber registered
Jan 29 08:34:02 localhost kernel: io scheduler bfq registered
Jan 29 08:34:02 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 29 08:34:02 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 29 08:34:02 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 29 08:34:02 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 29 08:34:02 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 29 08:34:02 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 29 08:34:02 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 29 08:34:02 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 08:34:02 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 08:34:02 localhost kernel: Non-volatile memory driver v1.3
Jan 29 08:34:02 localhost kernel: rdac: device handler registered
Jan 29 08:34:02 localhost kernel: hp_sw: device handler registered
Jan 29 08:34:02 localhost kernel: emc: device handler registered
Jan 29 08:34:02 localhost kernel: alua: device handler registered
Jan 29 08:34:02 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 29 08:34:02 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 29 08:34:02 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 29 08:34:02 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 29 08:34:02 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 29 08:34:02 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 29 08:34:02 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 29 08:34:02 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 29 08:34:02 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 29 08:34:02 localhost kernel: hub 1-0:1.0: USB hub found
Jan 29 08:34:02 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 29 08:34:02 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 29 08:34:02 localhost kernel: usbserial: USB Serial support registered for generic
Jan 29 08:34:02 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 08:34:02 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 08:34:02 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 08:34:02 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 08:34:02 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 08:34:02 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 29 08:34:02 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 29 08:34:02 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 29 08:34:02 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 08:34:02 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-29T08:34:01 UTC (1769675641)
Jan 29 08:34:02 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 29 08:34:02 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 08:34:02 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 08:34:02 localhost kernel: usbcore: registered new interface driver usbhid
Jan 29 08:34:02 localhost kernel: usbhid: USB HID core driver
Jan 29 08:34:02 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 29 08:34:02 localhost kernel: Initializing XFRM netlink socket
Jan 29 08:34:02 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 29 08:34:02 localhost kernel: Segment Routing with IPv6
Jan 29 08:34:02 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 29 08:34:02 localhost kernel: mpls_gso: MPLS GSO support
Jan 29 08:34:02 localhost kernel: IPI shorthand broadcast: enabled
Jan 29 08:34:02 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 08:34:02 localhost kernel: AES CTR mode by8 optimization enabled
Jan 29 08:34:02 localhost kernel: sched_clock: Marking stable (933001970, 161726200)->(1164455810, -69727640)
Jan 29 08:34:02 localhost kernel: registered taskstats version 1
Jan 29 08:34:02 localhost kernel: Loading compiled-in X.509 certificates
Jan 29 08:34:02 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 29 08:34:02 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 29 08:34:02 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 29 08:34:02 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 29 08:34:02 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 29 08:34:02 localhost kernel: Demotion targets for Node 0: null
Jan 29 08:34:02 localhost kernel: page_owner is disabled
Jan 29 08:34:02 localhost kernel: Key type .fscrypt registered
Jan 29 08:34:02 localhost kernel: Key type fscrypt-provisioning registered
Jan 29 08:34:02 localhost kernel: Key type big_key registered
Jan 29 08:34:02 localhost kernel: Key type encrypted registered
Jan 29 08:34:02 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 08:34:02 localhost kernel: Loading compiled-in module X.509 certificates
Jan 29 08:34:02 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 29 08:34:02 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 29 08:34:02 localhost kernel: ima: No architecture policies found
Jan 29 08:34:02 localhost kernel: evm: Initialising EVM extended attributes:
Jan 29 08:34:02 localhost kernel: evm: security.selinux
Jan 29 08:34:02 localhost kernel: evm: security.SMACK64 (disabled)
Jan 29 08:34:02 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 29 08:34:02 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 29 08:34:02 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 29 08:34:02 localhost kernel: evm: security.apparmor (disabled)
Jan 29 08:34:02 localhost kernel: evm: security.ima
Jan 29 08:34:02 localhost kernel: evm: security.capability
Jan 29 08:34:02 localhost kernel: evm: HMAC attrs: 0x1
Jan 29 08:34:02 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 29 08:34:02 localhost kernel: Running certificate verification RSA selftest
Jan 29 08:34:02 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 29 08:34:02 localhost kernel: Running certificate verification ECDSA selftest
Jan 29 08:34:02 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 29 08:34:02 localhost kernel: clk: Disabling unused clocks
Jan 29 08:34:02 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 29 08:34:02 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 29 08:34:02 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 29 08:34:02 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 29 08:34:02 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 29 08:34:02 localhost kernel: Run /init as init process
Jan 29 08:34:02 localhost kernel:   with arguments:
Jan 29 08:34:02 localhost kernel:     /init
Jan 29 08:34:02 localhost kernel:   with environment:
Jan 29 08:34:02 localhost kernel:     HOME=/
Jan 29 08:34:02 localhost kernel:     TERM=linux
Jan 29 08:34:02 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64
Jan 29 08:34:02 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 29 08:34:02 localhost systemd[1]: Detected virtualization kvm.
Jan 29 08:34:02 localhost systemd[1]: Detected architecture x86-64.
Jan 29 08:34:02 localhost systemd[1]: Running in initrd.
Jan 29 08:34:02 localhost systemd[1]: No hostname configured, using default hostname.
Jan 29 08:34:02 localhost systemd[1]: Hostname set to <localhost>.
Jan 29 08:34:02 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 29 08:34:02 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 29 08:34:02 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 29 08:34:02 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 29 08:34:02 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 29 08:34:02 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 29 08:34:02 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 29 08:34:02 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 29 08:34:02 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 29 08:34:02 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 29 08:34:02 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 29 08:34:02 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 29 08:34:02 localhost systemd[1]: Reached target Local File Systems.
Jan 29 08:34:02 localhost systemd[1]: Reached target Path Units.
Jan 29 08:34:02 localhost systemd[1]: Reached target Slice Units.
Jan 29 08:34:02 localhost systemd[1]: Reached target Swaps.
Jan 29 08:34:02 localhost systemd[1]: Reached target Timer Units.
Jan 29 08:34:02 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 29 08:34:02 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 29 08:34:02 localhost systemd[1]: Listening on Journal Socket.
Jan 29 08:34:02 localhost systemd[1]: Listening on udev Control Socket.
Jan 29 08:34:02 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 29 08:34:02 localhost systemd[1]: Reached target Socket Units.
Jan 29 08:34:02 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 29 08:34:02 localhost systemd[1]: Starting Journal Service...
Jan 29 08:34:02 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 29 08:34:02 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 29 08:34:02 localhost systemd[1]: Starting Create System Users...
Jan 29 08:34:02 localhost systemd[1]: Starting Setup Virtual Console...
Jan 29 08:34:02 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 29 08:34:02 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 29 08:34:02 localhost systemd[1]: Finished Create System Users.
Jan 29 08:34:02 localhost systemd-journald[304]: Journal started
Jan 29 08:34:02 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/ff47f49d26ab48e1aa1aaeb921932033) is 8.0M, max 153.6M, 145.6M free.
Jan 29 08:34:02 localhost systemd-sysusers[308]: Creating group 'users' with GID 100.
Jan 29 08:34:02 localhost systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Jan 29 08:34:02 localhost systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 29 08:34:02 localhost systemd[1]: Started Journal Service.
Jan 29 08:34:02 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 29 08:34:02 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 29 08:34:02 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 29 08:34:02 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 29 08:34:02 localhost systemd[1]: Finished Setup Virtual Console.
Jan 29 08:34:02 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 29 08:34:02 localhost systemd[1]: Starting dracut cmdline hook...
Jan 29 08:34:02 localhost dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Jan 29 08:34:02 localhost dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 29 08:34:02 localhost systemd[1]: Finished dracut cmdline hook.
Jan 29 08:34:02 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 29 08:34:02 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 08:34:02 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 29 08:34:02 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 29 08:34:02 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 29 08:34:02 localhost kernel: RPC: Registered udp transport module.
Jan 29 08:34:02 localhost kernel: RPC: Registered tcp transport module.
Jan 29 08:34:02 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 29 08:34:02 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 29 08:34:02 localhost rpc.statd[440]: Version 2.5.4 starting
Jan 29 08:34:02 localhost rpc.statd[440]: Initializing NSM state
Jan 29 08:34:02 localhost rpc.idmapd[445]: Setting log level to 0
Jan 29 08:34:02 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 29 08:34:02 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 29 08:34:02 localhost systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Jan 29 08:34:02 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 29 08:34:02 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 29 08:34:02 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 29 08:34:02 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 29 08:34:02 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 29 08:34:02 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 29 08:34:02 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 29 08:34:02 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 29 08:34:02 localhost systemd[1]: Reached target Network.
Jan 29 08:34:02 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 29 08:34:02 localhost systemd[1]: Starting dracut initqueue hook...
Jan 29 08:34:02 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 08:34:02 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 29 08:34:02 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 29 08:34:02 localhost kernel: libata version 3.00 loaded.
Jan 29 08:34:02 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 29 08:34:02 localhost systemd-udevd[494]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 08:34:02 localhost kernel:  vda: vda1
Jan 29 08:34:02 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 29 08:34:02 localhost kernel: scsi host0: ata_piix
Jan 29 08:34:02 localhost kernel: scsi host1: ata_piix
Jan 29 08:34:02 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 29 08:34:02 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 29 08:34:02 localhost systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 29 08:34:02 localhost systemd[1]: Reached target Initrd Root Device.
Jan 29 08:34:03 localhost kernel: ata1: found unknown device (class 0)
Jan 29 08:34:03 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 08:34:03 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 29 08:34:03 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 29 08:34:03 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 29 08:34:03 localhost systemd[1]: Reached target System Initialization.
Jan 29 08:34:03 localhost systemd[1]: Reached target Basic System.
Jan 29 08:34:03 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 29 08:34:03 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 08:34:03 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 08:34:03 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 29 08:34:03 localhost systemd[1]: Finished dracut initqueue hook.
Jan 29 08:34:03 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 29 08:34:03 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 29 08:34:03 localhost systemd[1]: Reached target Remote File Systems.
Jan 29 08:34:03 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 29 08:34:03 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 29 08:34:03 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 29 08:34:03 localhost systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Jan 29 08:34:03 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 29 08:34:03 localhost systemd[1]: Mounting /sysroot...
Jan 29 08:34:03 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 29 08:34:03 localhost kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 29 08:34:03 localhost kernel: XFS (vda1): Ending clean mount
Jan 29 08:34:03 localhost systemd[1]: Mounted /sysroot.
Jan 29 08:34:03 localhost systemd[1]: Reached target Initrd Root File System.
Jan 29 08:34:03 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 29 08:34:03 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 29 08:34:03 localhost systemd[1]: Reached target Initrd File Systems.
Jan 29 08:34:03 localhost systemd[1]: Reached target Initrd Default Target.
Jan 29 08:34:03 localhost systemd[1]: Starting dracut mount hook...
Jan 29 08:34:03 localhost systemd[1]: Finished dracut mount hook.
Jan 29 08:34:03 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 29 08:34:03 localhost rpc.idmapd[445]: exiting on signal 15
Jan 29 08:34:03 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 29 08:34:03 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 29 08:34:03 localhost systemd[1]: Stopped target Network.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Timer Units.
Jan 29 08:34:03 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 29 08:34:03 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Basic System.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Path Units.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Remote File Systems.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Slice Units.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Socket Units.
Jan 29 08:34:03 localhost systemd[1]: Stopped target System Initialization.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Local File Systems.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Swaps.
Jan 29 08:34:03 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped dracut mount hook.
Jan 29 08:34:03 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 29 08:34:03 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 29 08:34:03 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 29 08:34:03 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 29 08:34:03 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 29 08:34:03 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 29 08:34:03 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 29 08:34:03 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 29 08:34:03 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 29 08:34:03 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 29 08:34:03 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 29 08:34:03 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 29 08:34:03 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Closed udev Control Socket.
Jan 29 08:34:03 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Closed udev Kernel Socket.
Jan 29 08:34:03 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 29 08:34:03 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 29 08:34:03 localhost systemd[1]: Starting Cleanup udev Database...
Jan 29 08:34:03 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 29 08:34:03 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 29 08:34:03 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Stopped Create System Users.
Jan 29 08:34:03 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 08:34:03 localhost systemd[1]: Finished Cleanup udev Database.
Jan 29 08:34:03 localhost systemd[1]: Reached target Switch Root.
Jan 29 08:34:03 localhost systemd[1]: Starting Switch Root...
Jan 29 08:34:03 localhost systemd[1]: Switching root.
Jan 29 08:34:03 localhost systemd-journald[304]: Journal stopped
Jan 29 08:34:04 localhost systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Jan 29 08:34:04 localhost kernel: audit: type=1404 audit(1769675644.127:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 29 08:34:04 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 08:34:04 localhost kernel: SELinux:  policy capability open_perms=1
Jan 29 08:34:04 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 08:34:04 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 29 08:34:04 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 08:34:04 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 08:34:04 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 08:34:04 localhost kernel: audit: type=1403 audit(1769675644.234:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 08:34:04 localhost systemd[1]: Successfully loaded SELinux policy in 111.299ms.
Jan 29 08:34:04 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.688ms.
Jan 29 08:34:04 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 29 08:34:04 localhost systemd[1]: Detected virtualization kvm.
Jan 29 08:34:04 localhost systemd[1]: Detected architecture x86-64.
Jan 29 08:34:04 localhost systemd-rc-local-generator[634]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 08:34:04 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 08:34:04 localhost systemd[1]: Stopped Switch Root.
Jan 29 08:34:04 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 08:34:04 localhost systemd[1]: Created slice Slice /system/getty.
Jan 29 08:34:04 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 29 08:34:04 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 29 08:34:04 localhost systemd[1]: Created slice User and Session Slice.
Jan 29 08:34:04 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 29 08:34:04 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 29 08:34:04 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 29 08:34:04 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 29 08:34:04 localhost systemd[1]: Stopped target Switch Root.
Jan 29 08:34:04 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 29 08:34:04 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 29 08:34:04 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 29 08:34:04 localhost systemd[1]: Reached target Path Units.
Jan 29 08:34:04 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 29 08:34:04 localhost systemd[1]: Reached target Slice Units.
Jan 29 08:34:04 localhost systemd[1]: Reached target Swaps.
Jan 29 08:34:04 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 29 08:34:04 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 29 08:34:04 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 29 08:34:04 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 29 08:34:04 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 29 08:34:04 localhost systemd[1]: Listening on udev Control Socket.
Jan 29 08:34:04 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 29 08:34:04 localhost systemd[1]: Mounting Huge Pages File System...
Jan 29 08:34:04 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 29 08:34:04 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 29 08:34:04 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 29 08:34:04 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 29 08:34:04 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 29 08:34:04 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 29 08:34:04 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 29 08:34:04 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 29 08:34:04 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 29 08:34:04 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 29 08:34:04 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 08:34:04 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 29 08:34:04 localhost systemd[1]: Stopped Journal Service.
Jan 29 08:34:04 localhost systemd[1]: Starting Journal Service...
Jan 29 08:34:04 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 29 08:34:04 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 29 08:34:04 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 29 08:34:04 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 29 08:34:04 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 08:34:04 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 29 08:34:04 localhost systemd-journald[675]: Journal started
Jan 29 08:34:04 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 29 08:34:04 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 29 08:34:04 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 08:34:04 localhost kernel: fuse: init (API version 7.37)
Jan 29 08:34:04 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 29 08:34:04 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 29 08:34:04 localhost systemd[1]: Started Journal Service.
Jan 29 08:34:04 localhost systemd[1]: Mounted Huge Pages File System.
Jan 29 08:34:04 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 29 08:34:04 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 29 08:34:04 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 29 08:34:04 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 29 08:34:04 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 08:34:04 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 29 08:34:04 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 08:34:04 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 29 08:34:04 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 08:34:04 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 29 08:34:04 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 29 08:34:04 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 29 08:34:04 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 29 08:34:04 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 29 08:34:04 localhost kernel: ACPI: bus type drm_connector registered
Jan 29 08:34:04 localhost systemd[1]: Mounting FUSE Control File System...
Jan 29 08:34:04 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 29 08:34:04 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 29 08:34:04 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 29 08:34:04 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 08:34:04 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 29 08:34:04 localhost systemd[1]: Starting Create System Users...
Jan 29 08:34:04 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 08:34:04 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 29 08:34:04 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 29 08:34:04 localhost systemd-journald[675]: Received client request to flush runtime journal.
Jan 29 08:34:04 localhost systemd[1]: Mounted FUSE Control File System.
Jan 29 08:34:04 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 29 08:34:04 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 29 08:34:04 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 29 08:34:04 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 29 08:34:04 localhost systemd[1]: Finished Create System Users.
Jan 29 08:34:04 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 29 08:34:04 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 29 08:34:04 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 29 08:34:04 localhost systemd[1]: Reached target Local File Systems.
Jan 29 08:34:04 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 29 08:34:04 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 29 08:34:04 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 08:34:04 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 29 08:34:04 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 29 08:34:04 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 29 08:34:04 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 29 08:34:04 localhost bootctl[692]: Couldn't find EFI system partition, skipping.
Jan 29 08:34:05 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 29 08:34:05 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 29 08:34:05 localhost systemd[1]: Starting Security Auditing Service...
Jan 29 08:34:05 localhost systemd[1]: Starting RPC Bind...
Jan 29 08:34:05 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 29 08:34:05 localhost auditd[698]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 29 08:34:05 localhost auditd[698]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 29 08:34:05 localhost systemd[1]: Started RPC Bind.
Jan 29 08:34:05 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 29 08:34:05 localhost augenrules[703]: /sbin/augenrules: No change
Jan 29 08:34:05 localhost augenrules[718]: No rules
Jan 29 08:34:05 localhost augenrules[718]: enabled 1
Jan 29 08:34:05 localhost augenrules[718]: failure 1
Jan 29 08:34:05 localhost augenrules[718]: pid 698
Jan 29 08:34:05 localhost augenrules[718]: rate_limit 0
Jan 29 08:34:05 localhost augenrules[718]: backlog_limit 8192
Jan 29 08:34:05 localhost augenrules[718]: lost 0
Jan 29 08:34:05 localhost augenrules[718]: backlog 3
Jan 29 08:34:05 localhost augenrules[718]: backlog_wait_time 60000
Jan 29 08:34:05 localhost augenrules[718]: backlog_wait_time_actual 0
Jan 29 08:34:05 localhost augenrules[718]: enabled 1
Jan 29 08:34:05 localhost augenrules[718]: failure 1
Jan 29 08:34:05 localhost augenrules[718]: pid 698
Jan 29 08:34:05 localhost augenrules[718]: rate_limit 0
Jan 29 08:34:05 localhost augenrules[718]: backlog_limit 8192
Jan 29 08:34:05 localhost augenrules[718]: lost 0
Jan 29 08:34:05 localhost augenrules[718]: backlog 2
Jan 29 08:34:05 localhost augenrules[718]: backlog_wait_time 60000
Jan 29 08:34:05 localhost augenrules[718]: backlog_wait_time_actual 0
Jan 29 08:34:05 localhost augenrules[718]: enabled 1
Jan 29 08:34:05 localhost augenrules[718]: failure 1
Jan 29 08:34:05 localhost augenrules[718]: pid 698
Jan 29 08:34:05 localhost augenrules[718]: rate_limit 0
Jan 29 08:34:05 localhost augenrules[718]: backlog_limit 8192
Jan 29 08:34:05 localhost augenrules[718]: lost 0
Jan 29 08:34:05 localhost augenrules[718]: backlog 1
Jan 29 08:34:05 localhost augenrules[718]: backlog_wait_time 60000
Jan 29 08:34:05 localhost augenrules[718]: backlog_wait_time_actual 0
Jan 29 08:34:05 localhost systemd[1]: Started Security Auditing Service.
Jan 29 08:34:05 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 29 08:34:05 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 29 08:34:05 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 29 08:34:05 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 29 08:34:05 localhost systemd-udevd[726]: Using default interface naming scheme 'rhel-9.0'.
Jan 29 08:34:05 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 29 08:34:05 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 29 08:34:05 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 29 08:34:05 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 29 08:34:05 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 08:34:05 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 29 08:34:05 localhost systemd[1]: Starting Update is Completed...
Jan 29 08:34:05 localhost systemd-udevd[745]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 08:34:05 localhost systemd[1]: Finished Update is Completed.
Jan 29 08:34:05 localhost systemd[1]: Reached target System Initialization.
Jan 29 08:34:05 localhost systemd[1]: Started dnf makecache --timer.
Jan 29 08:34:05 localhost systemd[1]: Started Daily rotation of log files.
Jan 29 08:34:05 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 29 08:34:05 localhost systemd[1]: Reached target Timer Units.
Jan 29 08:34:05 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 29 08:34:05 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 29 08:34:05 localhost systemd[1]: Reached target Socket Units.
Jan 29 08:34:05 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 29 08:34:05 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 29 08:34:05 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 29 08:34:05 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 29 08:34:05 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 08:34:05 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 08:34:05 localhost dbus-broker-lau[769]: Ready
Jan 29 08:34:05 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 29 08:34:05 localhost systemd[1]: Reached target Basic System.
Jan 29 08:34:05 localhost systemd[1]: Starting NTP client/server...
Jan 29 08:34:05 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 29 08:34:05 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 29 08:34:05 localhost kernel: kvm_amd: TSC scaling supported
Jan 29 08:34:05 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 29 08:34:05 localhost kernel: kvm_amd: Nested Paging enabled
Jan 29 08:34:05 localhost kernel: kvm_amd: LBR virtualization supported
Jan 29 08:34:05 localhost kernel: Console: switching to colour dummy device 80x25
Jan 29 08:34:05 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 08:34:05 localhost kernel: [drm] features: -context_init
Jan 29 08:34:05 localhost kernel: [drm] number of scanouts: 1
Jan 29 08:34:05 localhost kernel: [drm] number of cap sets: 0
Jan 29 08:34:05 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 29 08:34:05 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 29 08:34:05 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 29 08:34:05 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 08:34:05 localhost chronyd[786]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 29 08:34:05 localhost chronyd[786]: Loaded 0 symmetric keys
Jan 29 08:34:05 localhost chronyd[786]: Using right/UTC timezone to obtain leap second data
Jan 29 08:34:05 localhost chronyd[786]: Loaded seccomp filter (level 2)
Jan 29 08:34:05 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 29 08:34:05 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 29 08:34:05 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 29 08:34:05 localhost systemd[1]: Started irqbalance daemon.
Jan 29 08:34:05 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 29 08:34:05 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 29 08:34:05 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 29 08:34:05 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 29 08:34:05 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 29 08:34:05 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 29 08:34:05 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 29 08:34:05 localhost systemd[1]: Starting User Login Management...
Jan 29 08:34:05 localhost systemd[1]: Started NTP client/server.
Jan 29 08:34:05 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 29 08:34:05 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 29 08:34:05 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 29 08:34:05 localhost systemd-logind[799]: New seat seat0.
Jan 29 08:34:05 localhost systemd-logind[799]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 29 08:34:05 localhost systemd-logind[799]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 29 08:34:05 localhost systemd[1]: Started User Login Management.
Jan 29 08:34:05 localhost iptables.init[793]: iptables: Applying firewall rules: [  OK  ]
Jan 29 08:34:05 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 29 08:34:06 localhost cloud-init[834]: Cloud-init v. 24.4-8.el9 running 'init-local' at Thu, 29 Jan 2026 08:34:06 +0000. Up 5.86 seconds.
Jan 29 08:34:06 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 29 08:34:06 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 29 08:34:06 localhost systemd[1]: run-cloud\x2dinit-tmp-tmplpqqdx78.mount: Deactivated successfully.
Jan 29 08:34:06 localhost systemd[1]: Starting Hostname Service...
Jan 29 08:34:06 localhost systemd[1]: Started Hostname Service.
Jan 29 08:34:06 np0005600302.novalocal systemd-hostnamed[848]: Hostname set to <np0005600302.novalocal> (static)
Jan 29 08:34:06 np0005600302.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 29 08:34:06 np0005600302.novalocal systemd[1]: Reached target Preparation for Network.
Jan 29 08:34:06 np0005600302.novalocal systemd[1]: Starting Network Manager...
Jan 29 08:34:06 np0005600302.novalocal NetworkManager[852]: <info>  [1769675646.9782] NetworkManager (version 1.54.3-2.el9) is starting... (boot:50915982-a81b-4e96-99dd-0c758f60a157)
Jan 29 08:34:06 np0005600302.novalocal NetworkManager[852]: <info>  [1769675646.9794] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 29 08:34:06 np0005600302.novalocal NetworkManager[852]: <info>  [1769675646.9938] manager[0x5628df9a6000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 29 08:34:06 np0005600302.novalocal NetworkManager[852]: <info>  [1769675646.9975] hostname: hostname: using hostnamed
Jan 29 08:34:06 np0005600302.novalocal NetworkManager[852]: <info>  [1769675646.9975] hostname: static hostname changed from (none) to "np0005600302.novalocal"
Jan 29 08:34:06 np0005600302.novalocal NetworkManager[852]: <info>  [1769675646.9979] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0119] manager[0x5628df9a6000]: rfkill: Wi-Fi hardware radio set enabled
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0120] manager[0x5628df9a6000]: rfkill: WWAN hardware radio set enabled
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0213] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0213] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0214] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0215] manager: Networking is enabled by state file
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0217] settings: Loaded settings plugin: keyfile (internal)
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0257] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0277] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0298] dhcp: init: Using DHCP client 'internal'
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0302] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0316] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0328] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0341] device (lo): Activation: starting connection 'lo' (3aa75622-136a-4c98-b79e-e9eaf4c8f693)
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0349] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0353] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0385] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: Started Network Manager.
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0390] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0393] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0395] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0397] device (eth0): carrier: link connected
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: Reached target Network.
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0402] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0410] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0418] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0422] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0423] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0426] manager: NetworkManager state is now CONNECTING
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0428] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0441] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0445] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0663] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0679] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.0683] device (lo): Activation: successful, device activated.
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: Reached target NFS client services.
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: Reached target Remote File Systems.
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.5223] dhcp4 (eth0): state changed new lease, address=38.102.83.196
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.5234] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.5256] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.5279] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.5280] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.5283] manager: NetworkManager state is now CONNECTED_SITE
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.5287] device (eth0): Activation: successful, device activated.
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.5292] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 29 08:34:07 np0005600302.novalocal NetworkManager[852]: <info>  [1769675647.5295] manager: startup complete
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 29 08:34:07 np0005600302.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: Cloud-init v. 24.4-8.el9 running 'init' at Thu, 29 Jan 2026 08:34:07 +0000. Up 7.22 seconds.
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: |  eth0  | True |        38.102.83.196         | 255.255.255.0 | global | fa:16:3e:c8:2e:c9 |
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: |  eth0  | True | fe80::f816:3eff:fec8:2ec9/64 |       .       |  link  | fa:16:3e:c8:2e:c9 |
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 29 08:34:07 np0005600302.novalocal cloud-init[915]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 29 08:34:08 np0005600302.novalocal useradd[981]: new group: name=cloud-user, GID=1001
Jan 29 08:34:08 np0005600302.novalocal useradd[981]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 29 08:34:08 np0005600302.novalocal useradd[981]: add 'cloud-user' to group 'adm'
Jan 29 08:34:08 np0005600302.novalocal useradd[981]: add 'cloud-user' to group 'systemd-journal'
Jan 29 08:34:08 np0005600302.novalocal useradd[981]: add 'cloud-user' to shadow group 'adm'
Jan 29 08:34:08 np0005600302.novalocal useradd[981]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: Generating public/private rsa key pair.
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: The key fingerprint is:
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: SHA256:fzwHo+HCcL3qAr0LSk3+wq3WDQf1Ygs9lzgysO/k0ZE root@np0005600302.novalocal
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: The key's randomart image is:
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: +---[RSA 3072]----+
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |                 |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |    .   .        |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |     o o + .     |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |    . = E.+      |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |    .o.BSBo o    |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |   +. *++o = o   |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |  ..+B *o = + .  |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: | . .+oB .o . o   |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |  ...oo+o        |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: +----[SHA256]-----+
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: Generating public/private ecdsa key pair.
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: The key fingerprint is:
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: SHA256:ybwM46Qz+cT5/Sxra6gPGF2Jp7LClyzJN3f5iGB+BoA root@np0005600302.novalocal
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: The key's randomart image is:
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: +---[ECDSA 256]---+
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |                 |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |       . .       |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: | .    . +        |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |E .  . * .       |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |   .o * S        |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: | o o.% = o       |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |  * /.B =.       |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |   B Oo=.++.     |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |    .o+o+o==o    |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: +----[SHA256]-----+
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: Generating public/private ed25519 key pair.
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: The key fingerprint is:
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: SHA256:BRQ2j3SYVY5SuQPwjojlY+pENNc1084dUB/YEmRf+Rs root@np0005600302.novalocal
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: The key's randomart image is:
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: +--[ED25519 256]--+
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |      .o@*=**+. o|
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |     . =+Xo++o.+ |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |  o o . +++o.oo .|
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: | . * . o o= .  E.|
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |  o = . S  .    o|
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: | . o .         . |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |  o              |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: | o               |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: |  .              |
Jan 29 08:34:09 np0005600302.novalocal cloud-init[915]: +----[SHA256]-----+
Jan 29 08:34:09 np0005600302.novalocal sm-notify[997]: Version 2.5.4 starting
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Reached target Network is Online.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Starting System Logging Service...
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Starting Permit User Sessions...
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 29 08:34:09 np0005600302.novalocal sshd[999]: Server listening on 0.0.0.0 port 22.
Jan 29 08:34:09 np0005600302.novalocal sshd[999]: Server listening on :: port 22.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Finished Permit User Sessions.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Started Command Scheduler.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Started Getty on tty1.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Reached target Login Prompts.
Jan 29 08:34:09 np0005600302.novalocal crond[1003]: (CRON) STARTUP (1.5.7)
Jan 29 08:34:09 np0005600302.novalocal crond[1003]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 29 08:34:09 np0005600302.novalocal crond[1003]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 74% if used.)
Jan 29 08:34:09 np0005600302.novalocal crond[1003]: (CRON) INFO (running with inotify support)
Jan 29 08:34:09 np0005600302.novalocal rsyslogd[998]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="998" x-info="https://www.rsyslog.com"] start
Jan 29 08:34:09 np0005600302.novalocal rsyslogd[998]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Started System Logging Service.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Reached target Multi-User System.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 29 08:34:09 np0005600302.novalocal rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 08:34:09 np0005600302.novalocal kdumpctl[1008]: kdump: No kdump initial ramdisk found.
Jan 29 08:34:09 np0005600302.novalocal kdumpctl[1008]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 29 08:34:09 np0005600302.novalocal cloud-init[1097]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Thu, 29 Jan 2026 08:34:09 +0000. Up 8.91 seconds.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 29 08:34:09 np0005600302.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 29 08:34:09 np0005600302.novalocal dracut[1259]: dracut-057-102.git20250818.el9
Jan 29 08:34:09 np0005600302.novalocal cloud-init[1277]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Thu, 29 Jan 2026 08:34:09 +0000. Up 9.31 seconds.
Jan 29 08:34:09 np0005600302.novalocal dracut[1261]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 29 08:34:10 np0005600302.novalocal cloud-init[1306]: #############################################################
Jan 29 08:34:10 np0005600302.novalocal cloud-init[1312]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 29 08:34:10 np0005600302.novalocal cloud-init[1320]: 256 SHA256:ybwM46Qz+cT5/Sxra6gPGF2Jp7LClyzJN3f5iGB+BoA root@np0005600302.novalocal (ECDSA)
Jan 29 08:34:10 np0005600302.novalocal cloud-init[1329]: 256 SHA256:BRQ2j3SYVY5SuQPwjojlY+pENNc1084dUB/YEmRf+Rs root@np0005600302.novalocal (ED25519)
Jan 29 08:34:10 np0005600302.novalocal cloud-init[1338]: 3072 SHA256:fzwHo+HCcL3qAr0LSk3+wq3WDQf1Ygs9lzgysO/k0ZE root@np0005600302.novalocal (RSA)
Jan 29 08:34:10 np0005600302.novalocal cloud-init[1339]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 29 08:34:10 np0005600302.novalocal cloud-init[1340]: #############################################################
Jan 29 08:34:10 np0005600302.novalocal cloud-init[1277]: Cloud-init v. 24.4-8.el9 finished at Thu, 29 Jan 2026 08:34:10 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.49 seconds
Jan 29 08:34:10 np0005600302.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 29 08:34:10 np0005600302.novalocal systemd[1]: Reached target Cloud-init target.
Jan 29 08:34:10 np0005600302.novalocal sshd-session[1359]: Connection reset by 38.102.83.114 port 45520 [preauth]
Jan 29 08:34:10 np0005600302.novalocal sshd-session[1367]: Unable to negotiate with 38.102.83.114 port 44762: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 29 08:34:10 np0005600302.novalocal sshd-session[1377]: Unable to negotiate with 38.102.83.114 port 44778: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 29 08:34:10 np0005600302.novalocal sshd-session[1382]: Unable to negotiate with 38.102.83.114 port 44780: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 29 08:34:10 np0005600302.novalocal sshd-session[1387]: Connection reset by 38.102.83.114 port 44788 [preauth]
Jan 29 08:34:10 np0005600302.novalocal sshd-session[1392]: Connection reset by 38.102.83.114 port 44794 [preauth]
Jan 29 08:34:10 np0005600302.novalocal sshd-session[1400]: Unable to negotiate with 38.102.83.114 port 44796: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 29 08:34:10 np0005600302.novalocal sshd-session[1372]: Connection closed by 38.102.83.114 port 44764 [preauth]
Jan 29 08:34:10 np0005600302.novalocal sshd-session[1405]: Unable to negotiate with 38.102.83.114 port 44806: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 29 08:34:10 np0005600302.novalocal dracut[1261]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: memstrack is not available
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: memstrack is not available
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: *** Including module: systemd ***
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: *** Including module: fips ***
Jan 29 08:34:11 np0005600302.novalocal dracut[1261]: *** Including module: systemd-initrd ***
Jan 29 08:34:12 np0005600302.novalocal dracut[1261]: *** Including module: i18n ***
Jan 29 08:34:12 np0005600302.novalocal dracut[1261]: *** Including module: drm ***
Jan 29 08:34:12 np0005600302.novalocal dracut[1261]: *** Including module: prefixdevname ***
Jan 29 08:34:12 np0005600302.novalocal dracut[1261]: *** Including module: kernel-modules ***
Jan 29 08:34:12 np0005600302.novalocal chronyd[786]: Selected source 162.159.200.1 (2.centos.pool.ntp.org)
Jan 29 08:34:12 np0005600302.novalocal chronyd[786]: System clock TAI offset set to 37 seconds
Jan 29 08:34:12 np0005600302.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 29 08:34:12 np0005600302.novalocal dracut[1261]: *** Including module: kernel-modules-extra ***
Jan 29 08:34:12 np0005600302.novalocal dracut[1261]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 29 08:34:12 np0005600302.novalocal dracut[1261]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 29 08:34:12 np0005600302.novalocal dracut[1261]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 29 08:34:12 np0005600302.novalocal dracut[1261]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 29 08:34:12 np0005600302.novalocal dracut[1261]: *** Including module: qemu ***
Jan 29 08:34:12 np0005600302.novalocal dracut[1261]: *** Including module: fstab-sys ***
Jan 29 08:34:12 np0005600302.novalocal dracut[1261]: *** Including module: rootfs-block ***
Jan 29 08:34:12 np0005600302.novalocal dracut[1261]: *** Including module: terminfo ***
Jan 29 08:34:13 np0005600302.novalocal dracut[1261]: *** Including module: udev-rules ***
Jan 29 08:34:13 np0005600302.novalocal dracut[1261]: Skipping udev rule: 91-permissions.rules
Jan 29 08:34:13 np0005600302.novalocal dracut[1261]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 29 08:34:13 np0005600302.novalocal dracut[1261]: *** Including module: virtiofs ***
Jan 29 08:34:13 np0005600302.novalocal dracut[1261]: *** Including module: dracut-systemd ***
Jan 29 08:34:13 np0005600302.novalocal dracut[1261]: *** Including module: usrmount ***
Jan 29 08:34:13 np0005600302.novalocal dracut[1261]: *** Including module: base ***
Jan 29 08:34:13 np0005600302.novalocal dracut[1261]: *** Including module: fs-lib ***
Jan 29 08:34:13 np0005600302.novalocal dracut[1261]: *** Including module: kdumpbase ***
Jan 29 08:34:14 np0005600302.novalocal chronyd[786]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:   microcode_ctl module: mangling fw_dir
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: configuration "intel" is ignored
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]: *** Including module: openssl ***
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]: *** Including module: shutdown ***
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]: *** Including module: squash ***
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]: *** Including modules done ***
Jan 29 08:34:14 np0005600302.novalocal dracut[1261]: *** Installing kernel module dependencies ***
Jan 29 08:34:15 np0005600302.novalocal dracut[1261]: *** Installing kernel module dependencies done ***
Jan 29 08:34:15 np0005600302.novalocal dracut[1261]: *** Resolving executable dependencies ***
Jan 29 08:34:16 np0005600302.novalocal dracut[1261]: *** Resolving executable dependencies done ***
Jan 29 08:34:16 np0005600302.novalocal dracut[1261]: *** Generating early-microcode cpio image ***
Jan 29 08:34:16 np0005600302.novalocal dracut[1261]: *** Store current command line parameters ***
Jan 29 08:34:16 np0005600302.novalocal dracut[1261]: Stored kernel commandline:
Jan 29 08:34:16 np0005600302.novalocal dracut[1261]: No dracut internal kernel commandline stored in the initramfs
Jan 29 08:34:16 np0005600302.novalocal dracut[1261]: *** Install squash loader ***
Jan 29 08:34:16 np0005600302.novalocal irqbalance[794]: Cannot change IRQ 35 affinity: Operation not permitted
Jan 29 08:34:16 np0005600302.novalocal irqbalance[794]: IRQ 35 affinity is now unmanaged
Jan 29 08:34:16 np0005600302.novalocal irqbalance[794]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 29 08:34:16 np0005600302.novalocal irqbalance[794]: IRQ 25 affinity is now unmanaged
Jan 29 08:34:16 np0005600302.novalocal irqbalance[794]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 29 08:34:16 np0005600302.novalocal irqbalance[794]: IRQ 31 affinity is now unmanaged
Jan 29 08:34:16 np0005600302.novalocal irqbalance[794]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 29 08:34:16 np0005600302.novalocal irqbalance[794]: IRQ 26 affinity is now unmanaged
Jan 29 08:34:16 np0005600302.novalocal irqbalance[794]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 29 08:34:16 np0005600302.novalocal irqbalance[794]: IRQ 29 affinity is now unmanaged
Jan 29 08:34:16 np0005600302.novalocal dracut[1261]: *** Squashing the files inside the initramfs ***
Jan 29 08:34:17 np0005600302.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 29 08:34:18 np0005600302.novalocal dracut[1261]: *** Squashing the files inside the initramfs done ***
Jan 29 08:34:18 np0005600302.novalocal dracut[1261]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 29 08:34:18 np0005600302.novalocal dracut[1261]: *** Hardlinking files ***
Jan 29 08:34:18 np0005600302.novalocal dracut[1261]: Mode:           real
Jan 29 08:34:18 np0005600302.novalocal dracut[1261]: Files:          50
Jan 29 08:34:18 np0005600302.novalocal dracut[1261]: Linked:         0 files
Jan 29 08:34:18 np0005600302.novalocal dracut[1261]: Compared:       0 xattrs
Jan 29 08:34:18 np0005600302.novalocal dracut[1261]: Compared:       0 files
Jan 29 08:34:18 np0005600302.novalocal dracut[1261]: Saved:          0 B
Jan 29 08:34:18 np0005600302.novalocal dracut[1261]: Duration:       0.000377 seconds
Jan 29 08:34:18 np0005600302.novalocal dracut[1261]: *** Hardlinking files done ***
Jan 29 08:34:18 np0005600302.novalocal dracut[1261]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 29 08:34:19 np0005600302.novalocal kdumpctl[1008]: kdump: kexec: loaded kdump kernel
Jan 29 08:34:19 np0005600302.novalocal kdumpctl[1008]: kdump: Starting kdump: [OK]
Jan 29 08:34:19 np0005600302.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 29 08:34:19 np0005600302.novalocal systemd[1]: Startup finished in 1.244s (kernel) + 2.277s (initrd) + 14.936s (userspace) = 18.457s.
Jan 29 08:34:37 np0005600302.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 29 08:34:43 np0005600302.novalocal sshd-session[4299]: Accepted publickey for zuul from 38.102.83.114 port 55396 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 29 08:34:43 np0005600302.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 29 08:34:43 np0005600302.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 29 08:34:43 np0005600302.novalocal systemd-logind[799]: New session 1 of user zuul.
Jan 29 08:34:43 np0005600302.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 29 08:34:43 np0005600302.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Queued start job for default target Main User Target.
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Created slice User Application Slice.
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Started Daily Cleanup of User's Temporary Directories.
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Reached target Paths.
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Reached target Timers.
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Starting D-Bus User Message Bus Socket...
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Starting Create User's Volatile Files and Directories...
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Finished Create User's Volatile Files and Directories.
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Listening on D-Bus User Message Bus Socket.
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Reached target Sockets.
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Reached target Basic System.
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Reached target Main User Target.
Jan 29 08:34:43 np0005600302.novalocal systemd[4303]: Startup finished in 125ms.
Jan 29 08:34:43 np0005600302.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 29 08:34:43 np0005600302.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 29 08:34:43 np0005600302.novalocal sshd-session[4299]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 08:34:44 np0005600302.novalocal python3[4385]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 08:34:47 np0005600302.novalocal python3[4413]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 08:34:52 np0005600302.novalocal python3[4471]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 08:34:53 np0005600302.novalocal python3[4511]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 29 08:34:55 np0005600302.novalocal python3[4537]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUVvs6nSpYyg30mc7kpTuwSpjA/pSiFAdUa/+iBdvhk2vxIXlkTB5M5PQjkV6lP6LWktqETl+OlkazSphyfnU10U9gCxrTqqSXlQQRjAyPb+Qnbods209BVbVFVUUPd2MS7oWrBEXsvnqrpTMeNhQ1Uge3FeNU+O8Ua/C+cfY7LH087wPgDdzLM6fNiT0KcESjN34TjUPUyLugqcggkOkFBmMJGWvDxlXO0w0B78afjQ8NFD2N0hk8YJSRvW6eyZC18ulpNqYDyqcE52olRm4ICwkDaz8Ri5sTyUqp+BupyciagFq59OrhuP8O2YJFuUYJrcpu5rSFiKxVQUo3gfQTySEyVMA6wqFsSkN9glsimh59FotnmLlg4cPUT0j5BJHCD/6DftgxmQmhB39gH7/GaK8ysD/b1rJwiBy14I/pRBgdkQwA+6BhJLTy1wy39tXkxyngCrmzptVZbq+5vGyF0tUzUH08CJR3bpE7HLJBs+KAbUbetb1Gt03sWOnDL1s= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:34:56 np0005600302.novalocal python3[4561]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:34:56 np0005600302.novalocal python3[4660]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:34:56 np0005600302.novalocal python3[4731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769675696.3080955-207-190633986026913/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=cb659b01dcf04205bbfe93b11e1414c6_id_rsa follow=False checksum=612a2207f91427f32002e9a840663eb46516315f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:34:57 np0005600302.novalocal python3[4855]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:34:57 np0005600302.novalocal python3[4926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769675697.228623-240-43032877986297/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=cb659b01dcf04205bbfe93b11e1414c6_id_rsa.pub follow=False checksum=19ae4e76a01be3ebdd1b1568568d8cd9ca82c562 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:34:59 np0005600302.novalocal python3[4974]: ansible-ping Invoked with data=pong
Jan 29 08:35:00 np0005600302.novalocal python3[4998]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 08:35:01 np0005600302.novalocal python3[5056]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 29 08:35:02 np0005600302.novalocal python3[5088]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:02 np0005600302.novalocal python3[5112]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:03 np0005600302.novalocal python3[5136]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:03 np0005600302.novalocal python3[5160]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:03 np0005600302.novalocal python3[5184]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:03 np0005600302.novalocal python3[5208]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:05 np0005600302.novalocal sudo[5232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltwbffbndudvedlujntcxutfolxeirpo ; /usr/bin/python3'
Jan 29 08:35:05 np0005600302.novalocal sudo[5232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:35:05 np0005600302.novalocal python3[5234]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:05 np0005600302.novalocal sudo[5232]: pam_unix(sudo:session): session closed for user root
Jan 29 08:35:06 np0005600302.novalocal sudo[5310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynqhilajxkwexvugslvqwfouucaisdzj ; /usr/bin/python3'
Jan 29 08:35:06 np0005600302.novalocal sudo[5310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:35:06 np0005600302.novalocal python3[5312]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:35:06 np0005600302.novalocal sudo[5310]: pam_unix(sudo:session): session closed for user root
Jan 29 08:35:06 np0005600302.novalocal sudo[5383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjapcjrausrbdiexumbjhqdekhvqucyy ; /usr/bin/python3'
Jan 29 08:35:06 np0005600302.novalocal sudo[5383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:35:06 np0005600302.novalocal python3[5385]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769675705.9265099-21-58515128494361/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:06 np0005600302.novalocal sudo[5383]: pam_unix(sudo:session): session closed for user root
Jan 29 08:35:07 np0005600302.novalocal python3[5433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:07 np0005600302.novalocal python3[5457]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:07 np0005600302.novalocal python3[5481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:08 np0005600302.novalocal python3[5505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:08 np0005600302.novalocal python3[5529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:08 np0005600302.novalocal python3[5553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:09 np0005600302.novalocal python3[5577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:09 np0005600302.novalocal python3[5601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:09 np0005600302.novalocal python3[5625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:09 np0005600302.novalocal python3[5649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:10 np0005600302.novalocal python3[5673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:10 np0005600302.novalocal python3[5697]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:10 np0005600302.novalocal python3[5721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:10 np0005600302.novalocal python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:11 np0005600302.novalocal python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:11 np0005600302.novalocal python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:11 np0005600302.novalocal python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:12 np0005600302.novalocal python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:12 np0005600302.novalocal python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:12 np0005600302.novalocal python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:13 np0005600302.novalocal python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:13 np0005600302.novalocal python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:13 np0005600302.novalocal python3[5961]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:13 np0005600302.novalocal python3[5985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:14 np0005600302.novalocal python3[6009]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:14 np0005600302.novalocal python3[6033]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:35:16 np0005600302.novalocal irqbalance[794]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 29 08:35:16 np0005600302.novalocal irqbalance[794]: IRQ 28 affinity is now unmanaged
Jan 29 08:35:17 np0005600302.novalocal sudo[6057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ribtzifzxqexxzocqscfrusdallevvrz ; /usr/bin/python3'
Jan 29 08:35:17 np0005600302.novalocal sudo[6057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:35:17 np0005600302.novalocal python3[6059]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 29 08:35:17 np0005600302.novalocal systemd[1]: Starting Time & Date Service...
Jan 29 08:35:17 np0005600302.novalocal systemd[1]: Started Time & Date Service.
Jan 29 08:35:17 np0005600302.novalocal systemd-timedated[6061]: Changed time zone to 'UTC' (UTC).
Jan 29 08:35:17 np0005600302.novalocal sudo[6057]: pam_unix(sudo:session): session closed for user root
Jan 29 08:35:17 np0005600302.novalocal sudo[6088]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhgoeqbjrkkbzjytcqegppwwtezakuxg ; /usr/bin/python3'
Jan 29 08:35:17 np0005600302.novalocal sudo[6088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:35:17 np0005600302.novalocal python3[6090]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:17 np0005600302.novalocal sudo[6088]: pam_unix(sudo:session): session closed for user root
Jan 29 08:35:18 np0005600302.novalocal python3[6166]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:35:18 np0005600302.novalocal python3[6237]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769675717.87977-153-25196644157624/source _original_basename=tmpk9dxd3yj follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:18 np0005600302.novalocal chronyd[786]: Selected source 216.197.156.83 (2.centos.pool.ntp.org)
Jan 29 08:35:19 np0005600302.novalocal python3[6337]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:35:19 np0005600302.novalocal python3[6408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769675718.804089-183-58428611889944/source _original_basename=tmp8855lqa_ follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:19 np0005600302.novalocal sudo[6508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmbmbajgpirbukjhmxrclkpxcydjtqfl ; /usr/bin/python3'
Jan 29 08:35:19 np0005600302.novalocal sudo[6508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:35:20 np0005600302.novalocal python3[6510]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:35:20 np0005600302.novalocal sudo[6508]: pam_unix(sudo:session): session closed for user root
Jan 29 08:35:20 np0005600302.novalocal sudo[6581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjpgkikafxerrsubyxijuyergvowdmhw ; /usr/bin/python3'
Jan 29 08:35:20 np0005600302.novalocal sudo[6581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:35:20 np0005600302.novalocal python3[6583]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769675719.8914363-231-3757560042933/source _original_basename=tmpc8t7sbrh follow=False checksum=b63507e202a3a3b81cbba68ed76d2afc76b7a992 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:20 np0005600302.novalocal sudo[6581]: pam_unix(sudo:session): session closed for user root
Jan 29 08:35:21 np0005600302.novalocal python3[6631]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 08:35:21 np0005600302.novalocal python3[6657]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 08:35:21 np0005600302.novalocal sudo[6735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aglvbalykjjhehabrorroxeiwpwslxrd ; /usr/bin/python3'
Jan 29 08:35:21 np0005600302.novalocal sudo[6735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:35:21 np0005600302.novalocal python3[6737]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:35:21 np0005600302.novalocal sudo[6735]: pam_unix(sudo:session): session closed for user root
Jan 29 08:35:22 np0005600302.novalocal sudo[6808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvynpdcbwhtoydoepddlrglwzmkfbani ; /usr/bin/python3'
Jan 29 08:35:22 np0005600302.novalocal sudo[6808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:35:22 np0005600302.novalocal python3[6810]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769675721.6239462-273-211093670134795/source _original_basename=tmp09qnirhs follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:22 np0005600302.novalocal sudo[6808]: pam_unix(sudo:session): session closed for user root
Jan 29 08:35:22 np0005600302.novalocal sudo[6859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwstspczuzcgoncmzvtkluxbewfazkpq ; /usr/bin/python3'
Jan 29 08:35:22 np0005600302.novalocal sudo[6859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:35:22 np0005600302.novalocal python3[6861]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-9e29-95d3-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 08:35:22 np0005600302.novalocal sudo[6859]: pam_unix(sudo:session): session closed for user root
Jan 29 08:35:23 np0005600302.novalocal python3[6889]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-9e29-95d3-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 29 08:35:24 np0005600302.novalocal python3[6917]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:26 np0005600302.novalocal irqbalance[794]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 29 08:35:26 np0005600302.novalocal irqbalance[794]: IRQ 27 affinity is now unmanaged
Jan 29 08:35:47 np0005600302.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 29 08:35:51 np0005600302.novalocal sudo[6943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcjtyxdfnmpmzntaegrewiifmetmrqew ; /usr/bin/python3'
Jan 29 08:35:51 np0005600302.novalocal sudo[6943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:35:51 np0005600302.novalocal python3[6945]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:35:51 np0005600302.novalocal sudo[6943]: pam_unix(sudo:session): session closed for user root
Jan 29 08:36:23 np0005600302.novalocal chronyd[786]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Jan 29 08:36:30 np0005600302.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 29 08:36:30 np0005600302.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 29 08:36:30 np0005600302.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 29 08:36:30 np0005600302.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 29 08:36:30 np0005600302.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 29 08:36:30 np0005600302.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 29 08:36:30 np0005600302.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 29 08:36:30 np0005600302.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 29 08:36:30 np0005600302.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 29 08:36:30 np0005600302.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 29 08:36:30 np0005600302.novalocal NetworkManager[852]: <info>  [1769675790.3860] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 29 08:36:30 np0005600302.novalocal systemd-udevd[6947]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 08:36:30 np0005600302.novalocal NetworkManager[852]: <info>  [1769675790.4011] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 08:36:30 np0005600302.novalocal NetworkManager[852]: <info>  [1769675790.4032] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 29 08:36:30 np0005600302.novalocal NetworkManager[852]: <info>  [1769675790.4036] device (eth1): carrier: link connected
Jan 29 08:36:30 np0005600302.novalocal NetworkManager[852]: <info>  [1769675790.4038] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 29 08:36:30 np0005600302.novalocal NetworkManager[852]: <info>  [1769675790.4043] policy: auto-activating connection 'Wired connection 1' (6000cf6a-9993-3a31-bcd5-cf355e28135a)
Jan 29 08:36:30 np0005600302.novalocal NetworkManager[852]: <info>  [1769675790.4046] device (eth1): Activation: starting connection 'Wired connection 1' (6000cf6a-9993-3a31-bcd5-cf355e28135a)
Jan 29 08:36:30 np0005600302.novalocal NetworkManager[852]: <info>  [1769675790.4047] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 08:36:30 np0005600302.novalocal NetworkManager[852]: <info>  [1769675790.4049] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 08:36:30 np0005600302.novalocal NetworkManager[852]: <info>  [1769675790.4053] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 08:36:30 np0005600302.novalocal NetworkManager[852]: <info>  [1769675790.4057] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 29 08:36:31 np0005600302.novalocal python3[6973]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-ef5c-a18b-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 08:36:41 np0005600302.novalocal sudo[7051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzmzrmzroszauxhbceofzlermauhdcwv ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 29 08:36:41 np0005600302.novalocal sudo[7051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:36:41 np0005600302.novalocal python3[7053]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:36:41 np0005600302.novalocal sudo[7051]: pam_unix(sudo:session): session closed for user root
Jan 29 08:36:41 np0005600302.novalocal sudo[7124]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fryzmqelceeritwzjbutfchvrouotipp ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 29 08:36:41 np0005600302.novalocal sudo[7124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:36:41 np0005600302.novalocal python3[7126]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769675800.9579604-102-16477790013073/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=87a62381af64c31ba5995f765c7f60019f152504 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:36:41 np0005600302.novalocal sudo[7124]: pam_unix(sudo:session): session closed for user root
Jan 29 08:36:42 np0005600302.novalocal sudo[7174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkhtywxdefwkhstfcjqwxqlhbmopwsqn ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 29 08:36:42 np0005600302.novalocal sudo[7174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:36:42 np0005600302.novalocal python3[7176]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: Stopping Network Manager...
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[852]: <info>  [1769675802.3537] caught SIGTERM, shutting down normally.
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[852]: <info>  [1769675802.3544] dhcp4 (eth0): canceled DHCP transaction
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[852]: <info>  [1769675802.3544] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[852]: <info>  [1769675802.3544] dhcp4 (eth0): state changed no lease
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[852]: <info>  [1769675802.3547] manager: NetworkManager state is now CONNECTING
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[852]: <info>  [1769675802.3652] dhcp4 (eth1): canceled DHCP transaction
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[852]: <info>  [1769675802.3652] dhcp4 (eth1): state changed no lease
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[852]: <info>  [1769675802.3704] exiting (success)
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: Stopped Network Manager.
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: NetworkManager.service: Consumed 1.201s CPU time, 10.0M memory peak.
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: Starting Network Manager...
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4095] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:50915982-a81b-4e96-99dd-0c758f60a157)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4100] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4151] manager[0x560ea0696000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: Starting Hostname Service...
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: Started Hostname Service.
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4723] hostname: hostname: using hostnamed
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4727] hostname: static hostname changed from (none) to "np0005600302.novalocal"
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4734] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4739] manager[0x560ea0696000]: rfkill: Wi-Fi hardware radio set enabled
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4739] manager[0x560ea0696000]: rfkill: WWAN hardware radio set enabled
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4769] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4769] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4769] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4770] manager: Networking is enabled by state file
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4772] settings: Loaded settings plugin: keyfile (internal)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4776] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4796] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4804] dhcp: init: Using DHCP client 'internal'
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4806] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4809] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4813] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4818] device (lo): Activation: starting connection 'lo' (3aa75622-136a-4c98-b79e-e9eaf4c8f693)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4822] device (eth0): carrier: link connected
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4825] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4827] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4828] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4832] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4836] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4839] device (eth1): carrier: link connected
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4843] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4846] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (6000cf6a-9993-3a31-bcd5-cf355e28135a) (indicated)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4846] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4849] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4854] device (eth1): Activation: starting connection 'Wired connection 1' (6000cf6a-9993-3a31-bcd5-cf355e28135a)
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: Started Network Manager.
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4858] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4874] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4876] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4877] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4879] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4881] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4883] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4884] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4886] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4891] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4894] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4900] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4903] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4913] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4918] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4922] device (lo): Activation: successful, device activated.
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4934] dhcp4 (eth0): state changed new lease, address=38.102.83.196
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4938] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.4996] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 29 08:36:42 np0005600302.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.5022] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.5023] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.5025] manager: NetworkManager state is now CONNECTED_SITE
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.5027] device (eth0): Activation: successful, device activated.
Jan 29 08:36:42 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675802.5032] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 29 08:36:42 np0005600302.novalocal sudo[7174]: pam_unix(sudo:session): session closed for user root
Jan 29 08:36:42 np0005600302.novalocal python3[7260]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-ef5c-a18b-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 08:36:52 np0005600302.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 29 08:36:58 np0005600302.novalocal systemd[4303]: Starting Mark boot as successful...
Jan 29 08:36:58 np0005600302.novalocal systemd[4303]: Finished Mark boot as successful.
Jan 29 08:37:12 np0005600302.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.5952] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 29 08:37:27 np0005600302.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 29 08:37:27 np0005600302.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6216] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6219] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6226] device (eth1): Activation: successful, device activated.
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6233] manager: startup complete
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6235] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <warn>  [1769675847.6240] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6249] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 29 08:37:27 np0005600302.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6372] dhcp4 (eth1): canceled DHCP transaction
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6372] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6373] dhcp4 (eth1): state changed no lease
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6388] policy: auto-activating connection 'ci-private-network' (f22e3c52-1212-547c-866b-5f2ef677cd2b)
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6393] device (eth1): Activation: starting connection 'ci-private-network' (f22e3c52-1212-547c-866b-5f2ef677cd2b)
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6395] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6398] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6405] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.6412] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.7427] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.7431] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 08:37:27 np0005600302.novalocal NetworkManager[7180]: <info>  [1769675847.7442] device (eth1): Activation: successful, device activated.
Jan 29 08:37:37 np0005600302.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 29 08:37:42 np0005600302.novalocal sudo[7364]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btokdamqglkqiiuhzxnlubipioaycpki ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 29 08:37:42 np0005600302.novalocal sudo[7364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:37:42 np0005600302.novalocal python3[7366]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:37:42 np0005600302.novalocal sudo[7364]: pam_unix(sudo:session): session closed for user root
Jan 29 08:37:42 np0005600302.novalocal sudo[7437]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sckegpejidcqppqjpsjasuufnzlijgoo ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 29 08:37:42 np0005600302.novalocal sudo[7437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:37:42 np0005600302.novalocal python3[7439]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769675862.188986-267-144759009204368/source _original_basename=tmp9xdajjws follow=False checksum=79c071727ded656824f3dae99882edb28326c223 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:37:42 np0005600302.novalocal sudo[7437]: pam_unix(sudo:session): session closed for user root
Jan 29 08:38:42 np0005600302.novalocal sshd-session[4312]: Received disconnect from 38.102.83.114 port 55396:11: disconnected by user
Jan 29 08:38:42 np0005600302.novalocal sshd-session[4312]: Disconnected from user zuul 38.102.83.114 port 55396
Jan 29 08:38:42 np0005600302.novalocal sshd-session[4299]: pam_unix(sshd:session): session closed for user zuul
Jan 29 08:38:42 np0005600302.novalocal systemd-logind[799]: Session 1 logged out. Waiting for processes to exit.
Jan 29 08:39:58 np0005600302.novalocal systemd[4303]: Created slice User Background Tasks Slice.
Jan 29 08:39:58 np0005600302.novalocal systemd[4303]: Starting Cleanup of User's Temporary Files and Directories...
Jan 29 08:39:58 np0005600302.novalocal systemd[4303]: Finished Cleanup of User's Temporary Files and Directories.
Jan 29 08:45:19 np0005600302.novalocal sshd-session[7470]: Accepted publickey for zuul from 38.102.83.114 port 46772 ssh2: RSA SHA256:UVFwpB4pGBKhI2DrodtDDM9jvfvTiEMRDyxyOHUhUhI
Jan 29 08:45:19 np0005600302.novalocal systemd-logind[799]: New session 3 of user zuul.
Jan 29 08:45:19 np0005600302.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 29 08:45:19 np0005600302.novalocal sshd-session[7470]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 08:45:19 np0005600302.novalocal sudo[7497]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsdxowlklvcupvltluuvqozciskkfmer ; /usr/bin/python3'
Jan 29 08:45:19 np0005600302.novalocal sudo[7497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:19 np0005600302.novalocal python3[7499]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-1af9-18b4-00000000216d-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 08:45:19 np0005600302.novalocal sudo[7497]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:19 np0005600302.novalocal sudo[7526]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqsugsiktpolynpnrdxfvluyahbqjopg ; /usr/bin/python3'
Jan 29 08:45:19 np0005600302.novalocal sudo[7526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:20 np0005600302.novalocal python3[7528]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:45:20 np0005600302.novalocal sudo[7526]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:20 np0005600302.novalocal sudo[7552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbjtjavfaydrraedtzswpjuhkdmcquyo ; /usr/bin/python3'
Jan 29 08:45:20 np0005600302.novalocal sudo[7552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:20 np0005600302.novalocal python3[7554]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:45:20 np0005600302.novalocal sudo[7552]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:20 np0005600302.novalocal sudo[7578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jksbbwulpzwigrfzbltvrtitcldhnfda ; /usr/bin/python3'
Jan 29 08:45:20 np0005600302.novalocal sudo[7578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:20 np0005600302.novalocal python3[7580]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:45:20 np0005600302.novalocal sudo[7578]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:20 np0005600302.novalocal sudo[7604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frhxksjdfdvdhmmhpldgmzidhbmoewfa ; /usr/bin/python3'
Jan 29 08:45:20 np0005600302.novalocal sudo[7604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:20 np0005600302.novalocal python3[7606]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:45:20 np0005600302.novalocal sudo[7604]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:21 np0005600302.novalocal sudo[7630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuhdgojtkmbwvmytwhkyrewyzcjynjxr ; /usr/bin/python3'
Jan 29 08:45:21 np0005600302.novalocal sudo[7630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:21 np0005600302.novalocal python3[7632]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:45:21 np0005600302.novalocal sudo[7630]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:21 np0005600302.novalocal sudo[7708]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zobarueuyvzltcvcufoojxscrrririyv ; /usr/bin/python3'
Jan 29 08:45:21 np0005600302.novalocal sudo[7708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:22 np0005600302.novalocal python3[7710]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:45:22 np0005600302.novalocal sudo[7708]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:22 np0005600302.novalocal sudo[7781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuxqujtjsekmljnrukkvrvxgfueldutl ; /usr/bin/python3'
Jan 29 08:45:22 np0005600302.novalocal sudo[7781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:22 np0005600302.novalocal python3[7783]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769676321.841593-498-185529898276961/source _original_basename=tmphkp5u4a4 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:45:22 np0005600302.novalocal sudo[7781]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:23 np0005600302.novalocal sudo[7831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmhwfvlvfrlchstfslznxxfnwrxqhnlq ; /usr/bin/python3'
Jan 29 08:45:23 np0005600302.novalocal sudo[7831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:23 np0005600302.novalocal python3[7833]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 08:45:23 np0005600302.novalocal systemd[1]: Reloading.
Jan 29 08:45:23 np0005600302.novalocal systemd-rc-local-generator[7851]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 08:45:23 np0005600302.novalocal sudo[7831]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:24 np0005600302.novalocal sudo[7887]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqhwrsxpiylhcrvlbhazxiizshrxzfmt ; /usr/bin/python3'
Jan 29 08:45:24 np0005600302.novalocal sudo[7887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:25 np0005600302.novalocal python3[7889]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 29 08:45:25 np0005600302.novalocal sudo[7887]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:25 np0005600302.novalocal sudo[7913]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iihagggoihduyzshbxeqpabspquzfsgg ; /usr/bin/python3'
Jan 29 08:45:25 np0005600302.novalocal sudo[7913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:25 np0005600302.novalocal python3[7915]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 08:45:25 np0005600302.novalocal sudo[7913]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:25 np0005600302.novalocal sudo[7941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyfgopymvxbuiaqaopqsfalmjivuvacc ; /usr/bin/python3'
Jan 29 08:45:25 np0005600302.novalocal sudo[7941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:25 np0005600302.novalocal python3[7943]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 08:45:25 np0005600302.novalocal sudo[7941]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:25 np0005600302.novalocal sudo[7969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oygzvhkisgsdvkmuwegpjzlfiuaugsbg ; /usr/bin/python3'
Jan 29 08:45:25 np0005600302.novalocal sudo[7969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:26 np0005600302.novalocal python3[7971]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 08:45:26 np0005600302.novalocal sudo[7969]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:26 np0005600302.novalocal sudo[7997]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywfzydclqlbbvybsczrorqigogayhvwq ; /usr/bin/python3'
Jan 29 08:45:26 np0005600302.novalocal sudo[7997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:26 np0005600302.novalocal python3[7999]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 08:45:26 np0005600302.novalocal sudo[7997]: pam_unix(sudo:session): session closed for user root
Jan 29 08:45:27 np0005600302.novalocal python3[8026]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-1af9-18b4-000000002174-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 08:45:27 np0005600302.novalocal python3[8056]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 08:45:29 np0005600302.novalocal sshd-session[7473]: Connection closed by 38.102.83.114 port 46772
Jan 29 08:45:29 np0005600302.novalocal sshd-session[7470]: pam_unix(sshd:session): session closed for user zuul
Jan 29 08:45:29 np0005600302.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 08:45:29 np0005600302.novalocal systemd[1]: session-3.scope: Consumed 3.526s CPU time.
Jan 29 08:45:29 np0005600302.novalocal systemd-logind[799]: Session 3 logged out. Waiting for processes to exit.
Jan 29 08:45:29 np0005600302.novalocal systemd-logind[799]: Removed session 3.
Jan 29 08:45:31 np0005600302.novalocal sshd-session[8063]: Accepted publickey for zuul from 38.102.83.114 port 54956 ssh2: RSA SHA256:UVFwpB4pGBKhI2DrodtDDM9jvfvTiEMRDyxyOHUhUhI
Jan 29 08:45:31 np0005600302.novalocal systemd-logind[799]: New session 4 of user zuul.
Jan 29 08:45:31 np0005600302.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 29 08:45:31 np0005600302.novalocal sshd-session[8063]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 08:45:31 np0005600302.novalocal sudo[8090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdsuueoocthgjnlydtssfjyehpjedyjq ; /usr/bin/python3'
Jan 29 08:45:31 np0005600302.novalocal sudo[8090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:45:31 np0005600302.novalocal python3[8092]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 29 08:45:56 np0005600302.novalocal setsebool[8135]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 29 08:45:56 np0005600302.novalocal setsebool[8135]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 29 08:46:07 np0005600302.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 29 08:46:07 np0005600302.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 08:46:07 np0005600302.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 29 08:46:07 np0005600302.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 08:46:07 np0005600302.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 29 08:46:07 np0005600302.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 08:46:07 np0005600302.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 08:46:07 np0005600302.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 08:46:17 np0005600302.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 29 08:46:17 np0005600302.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 08:46:17 np0005600302.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 29 08:46:17 np0005600302.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 08:46:17 np0005600302.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 29 08:46:17 np0005600302.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 08:46:17 np0005600302.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 08:46:17 np0005600302.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 08:46:36 np0005600302.novalocal dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 29 08:46:36 np0005600302.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 08:46:36 np0005600302.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 29 08:46:37 np0005600302.novalocal systemd[1]: Reloading.
Jan 29 08:46:37 np0005600302.novalocal systemd-rc-local-generator[8904]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 08:46:37 np0005600302.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 08:46:38 np0005600302.novalocal sudo[8090]: pam_unix(sudo:session): session closed for user root
Jan 29 08:46:38 np0005600302.novalocal python3[10788]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-7d74-f7a9-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 08:46:39 np0005600302.novalocal kernel: evm: overlay not supported
Jan 29 08:46:39 np0005600302.novalocal systemd[4303]: Starting D-Bus User Message Bus...
Jan 29 08:46:39 np0005600302.novalocal dbus-broker-launch[12070]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 29 08:46:39 np0005600302.novalocal dbus-broker-launch[12070]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 29 08:46:39 np0005600302.novalocal systemd[4303]: Started D-Bus User Message Bus.
Jan 29 08:46:39 np0005600302.novalocal dbus-broker-lau[12070]: Ready
Jan 29 08:46:39 np0005600302.novalocal systemd[4303]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 29 08:46:39 np0005600302.novalocal systemd[4303]: Created slice Slice /user.
Jan 29 08:46:39 np0005600302.novalocal systemd[4303]: podman-11966.scope: unit configures an IP firewall, but not running as root.
Jan 29 08:46:39 np0005600302.novalocal systemd[4303]: (This warning is only shown for the first unit using IP firewalling.)
Jan 29 08:46:39 np0005600302.novalocal systemd[4303]: Started podman-11966.scope.
Jan 29 08:46:39 np0005600302.novalocal systemd[4303]: Started podman-pause-e43eee91.scope.
Jan 29 08:46:40 np0005600302.novalocal sudo[12756]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inyhboepuppptnzdyktjxxyhxmtgxwpl ; /usr/bin/python3'
Jan 29 08:46:40 np0005600302.novalocal sudo[12756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:46:40 np0005600302.novalocal python3[12758]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.2:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.2:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:46:40 np0005600302.novalocal python3[12758]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 29 08:46:40 np0005600302.novalocal sudo[12756]: pam_unix(sudo:session): session closed for user root
Jan 29 08:46:40 np0005600302.novalocal sshd-session[8066]: Connection closed by 38.102.83.114 port 54956
Jan 29 08:46:40 np0005600302.novalocal sshd-session[8063]: pam_unix(sshd:session): session closed for user zuul
Jan 29 08:46:40 np0005600302.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 08:46:40 np0005600302.novalocal systemd[1]: session-4.scope: Consumed 42.500s CPU time.
Jan 29 08:46:40 np0005600302.novalocal systemd-logind[799]: Session 4 logged out. Waiting for processes to exit.
Jan 29 08:46:40 np0005600302.novalocal systemd-logind[799]: Removed session 4.
Jan 29 08:46:59 np0005600302.novalocal sshd-session[24920]: Connection closed by 38.129.56.236 port 37634 [preauth]
Jan 29 08:46:59 np0005600302.novalocal sshd-session[24923]: Connection closed by 38.129.56.236 port 37636 [preauth]
Jan 29 08:46:59 np0005600302.novalocal sshd-session[24919]: Unable to negotiate with 38.129.56.236 port 37640: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 29 08:46:59 np0005600302.novalocal sshd-session[24927]: Unable to negotiate with 38.129.56.236 port 37654: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 29 08:46:59 np0005600302.novalocal sshd-session[24921]: Unable to negotiate with 38.129.56.236 port 37658: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 29 08:47:02 np0005600302.novalocal sshd-session[26399]: Accepted publickey for zuul from 38.102.83.114 port 49038 ssh2: RSA SHA256:UVFwpB4pGBKhI2DrodtDDM9jvfvTiEMRDyxyOHUhUhI
Jan 29 08:47:02 np0005600302.novalocal systemd-logind[799]: New session 5 of user zuul.
Jan 29 08:47:02 np0005600302.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 29 08:47:02 np0005600302.novalocal sshd-session[26399]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 08:47:03 np0005600302.novalocal python3[26483]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNVinzuuT9c1OZkuK9mEWiOQhqSdvp+kcPL0XDrw7SjD5uhmMs2Eecqdl0QzVM8yvVtEDiCdJv0YPdvZSwIHSZY= zuul@np0005600301.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:47:03 np0005600302.novalocal sudo[26647]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeopdlmkrymvlmztjpvqqauswpxhnvuk ; /usr/bin/python3'
Jan 29 08:47:03 np0005600302.novalocal sudo[26647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:47:03 np0005600302.novalocal python3[26658]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNVinzuuT9c1OZkuK9mEWiOQhqSdvp+kcPL0XDrw7SjD5uhmMs2Eecqdl0QzVM8yvVtEDiCdJv0YPdvZSwIHSZY= zuul@np0005600301.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:47:03 np0005600302.novalocal sudo[26647]: pam_unix(sudo:session): session closed for user root
Jan 29 08:47:03 np0005600302.novalocal sudo[26941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vakpmknxjiwripznpbnwloricdfhfczm ; /usr/bin/python3'
Jan 29 08:47:03 np0005600302.novalocal sudo[26941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:47:04 np0005600302.novalocal python3[26944]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005600302.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 29 08:47:05 np0005600302.novalocal useradd[27019]: new group: name=cloud-admin, GID=1002
Jan 29 08:47:05 np0005600302.novalocal useradd[27019]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 29 08:47:05 np0005600302.novalocal sudo[26941]: pam_unix(sudo:session): session closed for user root
Jan 29 08:47:05 np0005600302.novalocal sudo[27341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uppbmoqydesmxfosrjvmnasczntofakp ; /usr/bin/python3'
Jan 29 08:47:05 np0005600302.novalocal sudo[27341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:47:05 np0005600302.novalocal python3[27350]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNVinzuuT9c1OZkuK9mEWiOQhqSdvp+kcPL0XDrw7SjD5uhmMs2Eecqdl0QzVM8yvVtEDiCdJv0YPdvZSwIHSZY= zuul@np0005600301.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 08:47:05 np0005600302.novalocal sudo[27341]: pam_unix(sudo:session): session closed for user root
Jan 29 08:47:06 np0005600302.novalocal sudo[27573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drogaoadvufyelyacypcgiqxvqbvdzta ; /usr/bin/python3'
Jan 29 08:47:06 np0005600302.novalocal sudo[27573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:47:06 np0005600302.novalocal python3[27580]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:47:06 np0005600302.novalocal sudo[27573]: pam_unix(sudo:session): session closed for user root
Jan 29 08:47:06 np0005600302.novalocal sudo[27875]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewpyjsmtnmqlvpmjdpvxtuedyuvohwbq ; /usr/bin/python3'
Jan 29 08:47:06 np0005600302.novalocal sudo[27875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:47:06 np0005600302.novalocal python3[27885]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769676426.0654886-135-179896989333024/source _original_basename=tmpbwxf3cos follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:47:06 np0005600302.novalocal sudo[27875]: pam_unix(sudo:session): session closed for user root
Jan 29 08:47:07 np0005600302.novalocal sudo[28281]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjmnhdbfmlrryevmzcyzgsighkumkcnk ; /usr/bin/python3'
Jan 29 08:47:07 np0005600302.novalocal sudo[28281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:47:07 np0005600302.novalocal python3[28290]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 29 08:47:07 np0005600302.novalocal systemd[1]: Starting Hostname Service...
Jan 29 08:47:07 np0005600302.novalocal systemd[1]: Started Hostname Service.
Jan 29 08:47:07 np0005600302.novalocal systemd-hostnamed[28421]: Changed pretty hostname to 'compute-0'
Jan 29 08:47:07 compute-0 systemd-hostnamed[28421]: Hostname set to <compute-0> (static)
Jan 29 08:47:07 compute-0 NetworkManager[7180]: <info>  [1769676427.7108] hostname: static hostname changed from "np0005600302.novalocal" to "compute-0"
Jan 29 08:47:07 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 29 08:47:07 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 29 08:47:07 compute-0 sudo[28281]: pam_unix(sudo:session): session closed for user root
Jan 29 08:47:08 compute-0 sshd-session[26443]: Connection closed by 38.102.83.114 port 49038
Jan 29 08:47:08 compute-0 sshd-session[26399]: pam_unix(sshd:session): session closed for user zuul
Jan 29 08:47:08 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 08:47:08 compute-0 systemd[1]: session-5.scope: Consumed 2.137s CPU time.
Jan 29 08:47:08 compute-0 systemd-logind[799]: Session 5 logged out. Waiting for processes to exit.
Jan 29 08:47:08 compute-0 systemd-logind[799]: Removed session 5.
Jan 29 08:47:10 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 08:47:10 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 29 08:47:10 compute-0 systemd[1]: man-db-cache-update.service: Consumed 34.693s CPU time.
Jan 29 08:47:10 compute-0 systemd[1]: run-rf6de7651ac0346ac92b6319d2c450712.service: Deactivated successfully.
Jan 29 08:47:17 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 29 08:47:37 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 29 08:49:48 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 29 08:49:48 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 29 08:49:48 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 29 08:49:48 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 29 08:51:30 compute-0 sshd-session[29989]: Accepted publickey for zuul from 38.129.56.236 port 52982 ssh2: RSA SHA256:UVFwpB4pGBKhI2DrodtDDM9jvfvTiEMRDyxyOHUhUhI
Jan 29 08:51:30 compute-0 systemd-logind[799]: New session 6 of user zuul.
Jan 29 08:51:30 compute-0 systemd[1]: Started Session 6 of User zuul.
Jan 29 08:51:30 compute-0 sshd-session[29989]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 08:51:31 compute-0 python3[30065]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 08:51:32 compute-0 sudo[30179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ergnphwzneynciiddawdcfurdcvlbmye ; /usr/bin/python3'
Jan 29 08:51:32 compute-0 sudo[30179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:32 compute-0 python3[30181]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:51:32 compute-0 sudo[30179]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:32 compute-0 sudo[30252]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptpfrhjzwaraoknuwhmgbebkalyxzyzs ; /usr/bin/python3'
Jan 29 08:51:32 compute-0 sudo[30252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:33 compute-0 python3[30254]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769676692.3779793-33750-5267411790757/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:51:33 compute-0 sudo[30252]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:33 compute-0 sudo[30278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfrryyfgavmobyeikeggytnxheltqefq ; /usr/bin/python3'
Jan 29 08:51:33 compute-0 sudo[30278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:33 compute-0 python3[30280]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:51:33 compute-0 sudo[30278]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:33 compute-0 sudo[30351]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymkgsiyaqwqxzkkgehazomzwbgfeiopu ; /usr/bin/python3'
Jan 29 08:51:33 compute-0 sudo[30351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:33 compute-0 python3[30353]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769676692.3779793-33750-5267411790757/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:51:33 compute-0 sudo[30351]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:33 compute-0 sudo[30377]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imysvsfstgkjolegurxnlzwwhuoknrlm ; /usr/bin/python3'
Jan 29 08:51:33 compute-0 sudo[30377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:33 compute-0 python3[30379]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:51:33 compute-0 sudo[30377]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:33 compute-0 sudo[30450]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veebcqdwvjztdvfckeouekeantbqzxwd ; /usr/bin/python3'
Jan 29 08:51:33 compute-0 sudo[30450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:34 compute-0 python3[30452]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769676692.3779793-33750-5267411790757/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:51:34 compute-0 sudo[30450]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:34 compute-0 sudo[30476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnmdubykjrfkpzircnfbszhczmkijbbm ; /usr/bin/python3'
Jan 29 08:51:34 compute-0 sudo[30476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:34 compute-0 python3[30478]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:51:34 compute-0 sudo[30476]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:34 compute-0 sudo[30549]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcppspcjrynjbbrhfywdzgprvfwrghsg ; /usr/bin/python3'
Jan 29 08:51:34 compute-0 sudo[30549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:34 compute-0 python3[30551]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769676692.3779793-33750-5267411790757/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:51:34 compute-0 sudo[30549]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:34 compute-0 sudo[30575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbhzrwtkdqsbffpclpijhivsjkfxsjwh ; /usr/bin/python3'
Jan 29 08:51:34 compute-0 sudo[30575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:34 compute-0 python3[30577]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:51:34 compute-0 sudo[30575]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:34 compute-0 sudo[30648]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpioycpyatsnhlmpmhpyfukmxwcslsgl ; /usr/bin/python3'
Jan 29 08:51:34 compute-0 sudo[30648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:34 compute-0 python3[30650]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769676692.3779793-33750-5267411790757/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:51:35 compute-0 sudo[30648]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:35 compute-0 sudo[30674]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnhpwhqoosiwwpmlyodmnmulbsqpvyrz ; /usr/bin/python3'
Jan 29 08:51:35 compute-0 sudo[30674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:35 compute-0 python3[30676]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:51:35 compute-0 sudo[30674]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:35 compute-0 sudo[30747]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxyfffpyctcbsyckklyyiydxojjpwjjb ; /usr/bin/python3'
Jan 29 08:51:35 compute-0 sudo[30747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:35 compute-0 python3[30749]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769676692.3779793-33750-5267411790757/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:51:35 compute-0 sudo[30747]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:35 compute-0 sudo[30773]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbddbgkstmlfnlrpygidghokdpykzutl ; /usr/bin/python3'
Jan 29 08:51:35 compute-0 sudo[30773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:35 compute-0 python3[30775]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 08:51:35 compute-0 sudo[30773]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:35 compute-0 sudo[30846]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auazdagqwglubiykxtioufmpnnftsfhc ; /usr/bin/python3'
Jan 29 08:51:35 compute-0 sudo[30846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 08:51:36 compute-0 python3[30848]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769676692.3779793-33750-5267411790757/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 08:51:36 compute-0 sudo[30846]: pam_unix(sudo:session): session closed for user root
Jan 29 08:51:38 compute-0 sshd-session[30874]: Connection closed by 192.168.122.11 port 58988 [preauth]
Jan 29 08:51:38 compute-0 sshd-session[30875]: Unable to negotiate with 192.168.122.11 port 59002: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 29 08:51:38 compute-0 sshd-session[30873]: Unable to negotiate with 192.168.122.11 port 59008: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 29 08:51:38 compute-0 sshd-session[30877]: Connection closed by 192.168.122.11 port 58974 [preauth]
Jan 29 08:51:38 compute-0 sshd-session[30876]: Unable to negotiate with 192.168.122.11 port 58994: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 29 08:51:48 compute-0 python3[30906]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 08:55:58 compute-0 sshd-session[30911]: error: kex_exchange_identification: read: Connection reset by peer
Jan 29 08:55:58 compute-0 sshd-session[30911]: Connection reset by 176.120.22.52 port 50252
Jan 29 08:56:47 compute-0 sshd-session[29992]: Received disconnect from 38.129.56.236 port 52982:11: disconnected by user
Jan 29 08:56:47 compute-0 sshd-session[29992]: Disconnected from user zuul 38.129.56.236 port 52982
Jan 29 08:56:47 compute-0 sshd-session[29989]: pam_unix(sshd:session): session closed for user zuul
Jan 29 08:56:47 compute-0 systemd-logind[799]: Session 6 logged out. Waiting for processes to exit.
Jan 29 08:56:47 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 08:56:47 compute-0 systemd[1]: session-6.scope: Consumed 3.816s CPU time.
Jan 29 08:56:47 compute-0 systemd-logind[799]: Removed session 6.
Jan 29 09:01:01 compute-0 CROND[30915]: (root) CMD (run-parts /etc/cron.hourly)
Jan 29 09:01:01 compute-0 run-parts[30918]: (/etc/cron.hourly) starting 0anacron
Jan 29 09:01:01 compute-0 anacron[30926]: Anacron started on 2026-01-29
Jan 29 09:01:01 compute-0 anacron[30926]: Will run job `cron.daily' in 19 min.
Jan 29 09:01:01 compute-0 anacron[30926]: Will run job `cron.weekly' in 39 min.
Jan 29 09:01:01 compute-0 anacron[30926]: Will run job `cron.monthly' in 59 min.
Jan 29 09:01:01 compute-0 anacron[30926]: Jobs will be executed sequentially
Jan 29 09:01:01 compute-0 run-parts[30928]: (/etc/cron.hourly) finished 0anacron
Jan 29 09:01:01 compute-0 CROND[30914]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 29 09:03:48 compute-0 sshd-session[30930]: Accepted publickey for zuul from 192.168.122.30 port 47114 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:03:48 compute-0 systemd-logind[799]: New session 7 of user zuul.
Jan 29 09:03:48 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 29 09:03:48 compute-0 sshd-session[30930]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:03:49 compute-0 python3.9[31083]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:03:50 compute-0 sudo[31262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umewrelpnxvoruvzkpudztbhijyqjpwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677430.193072-27-172526410429097/AnsiballZ_command.py'
Jan 29 09:03:50 compute-0 sudo[31262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:03:50 compute-0 python3.9[31264]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:03:58 compute-0 sudo[31262]: pam_unix(sudo:session): session closed for user root
Jan 29 09:03:58 compute-0 sshd-session[30933]: Connection closed by 192.168.122.30 port 47114
Jan 29 09:03:58 compute-0 sshd-session[30930]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:03:58 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 09:03:58 compute-0 systemd[1]: session-7.scope: Consumed 7.580s CPU time.
Jan 29 09:03:58 compute-0 systemd-logind[799]: Session 7 logged out. Waiting for processes to exit.
Jan 29 09:03:58 compute-0 systemd-logind[799]: Removed session 7.
Jan 29 09:04:14 compute-0 sshd-session[31322]: Accepted publickey for zuul from 192.168.122.30 port 60036 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:04:14 compute-0 systemd-logind[799]: New session 8 of user zuul.
Jan 29 09:04:14 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 29 09:04:14 compute-0 sshd-session[31322]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:04:15 compute-0 python3.9[31475]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 29 09:04:16 compute-0 python3.9[31649]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:04:17 compute-0 sudo[31799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwotuqbqdwbmdpfbelglmcvbfaoskmdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677456.6935482-40-144633909274786/AnsiballZ_command.py'
Jan 29 09:04:17 compute-0 sudo[31799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:04:17 compute-0 python3.9[31801]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:04:17 compute-0 sudo[31799]: pam_unix(sudo:session): session closed for user root
Jan 29 09:04:17 compute-0 sudo[31952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkyailnxycctnklwbvfqdlieegwddaet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677457.5601866-52-214763953514400/AnsiballZ_stat.py'
Jan 29 09:04:17 compute-0 sudo[31952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:04:18 compute-0 python3.9[31954]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:04:18 compute-0 sudo[31952]: pam_unix(sudo:session): session closed for user root
Jan 29 09:04:18 compute-0 sudo[32104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vabljwmefxarexwmyukvsfzumpdhypoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677458.2697885-60-208061919429571/AnsiballZ_file.py'
Jan 29 09:04:18 compute-0 sudo[32104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:04:18 compute-0 python3.9[32106]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:04:18 compute-0 sudo[32104]: pam_unix(sudo:session): session closed for user root
Jan 29 09:04:19 compute-0 sudo[32256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sypmjhhnrllbekgnkefhmxxcjgpiwegf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677459.0633166-68-63213551269983/AnsiballZ_stat.py'
Jan 29 09:04:19 compute-0 sudo[32256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:04:19 compute-0 python3.9[32258]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:04:19 compute-0 sudo[32256]: pam_unix(sudo:session): session closed for user root
Jan 29 09:04:19 compute-0 sudo[32379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kunxiqxrujoomhqetysylpgxhwmpehgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677459.0633166-68-63213551269983/AnsiballZ_copy.py'
Jan 29 09:04:19 compute-0 sudo[32379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:04:20 compute-0 python3.9[32381]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769677459.0633166-68-63213551269983/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:04:20 compute-0 sudo[32379]: pam_unix(sudo:session): session closed for user root
Jan 29 09:04:20 compute-0 sudo[32531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljcpldqunumatbqguhjgcafzyemrahyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677460.2846935-83-84721357884519/AnsiballZ_setup.py'
Jan 29 09:04:20 compute-0 sudo[32531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:04:20 compute-0 python3.9[32533]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:04:21 compute-0 sudo[32531]: pam_unix(sudo:session): session closed for user root
Jan 29 09:04:21 compute-0 sudo[32687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhqqglcxsydpvhjfpheczfhwhwzrupfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677461.1345417-91-1542969082253/AnsiballZ_file.py'
Jan 29 09:04:21 compute-0 sudo[32687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:04:21 compute-0 python3.9[32689]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:04:21 compute-0 sudo[32687]: pam_unix(sudo:session): session closed for user root
Jan 29 09:04:21 compute-0 sudo[32839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzzrnqmsvgefrasjhoqirnofzheculgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677461.7237916-100-198445549881475/AnsiballZ_file.py'
Jan 29 09:04:21 compute-0 sudo[32839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:04:22 compute-0 python3.9[32841]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:04:22 compute-0 sudo[32839]: pam_unix(sudo:session): session closed for user root
Jan 29 09:04:22 compute-0 python3.9[32991]: ansible-ansible.builtin.service_facts Invoked
Jan 29 09:04:26 compute-0 python3.9[33244]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:04:27 compute-0 python3.9[33394]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:04:28 compute-0 python3.9[33548]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:04:28 compute-0 sudo[33704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abbkifqeaiysizgbhwrwiegrqffsdpqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677468.5971353-148-34925526401772/AnsiballZ_setup.py'
Jan 29 09:04:28 compute-0 sudo[33704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:04:29 compute-0 python3.9[33706]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:04:29 compute-0 sudo[33704]: pam_unix(sudo:session): session closed for user root
Jan 29 09:04:29 compute-0 sudo[33788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjactxjooyiaeqemjdpdgmuiaolfdkfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677468.5971353-148-34925526401772/AnsiballZ_dnf.py'
Jan 29 09:04:29 compute-0 sudo[33788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:04:29 compute-0 python3.9[33790]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:05:11 compute-0 systemd[1]: Reloading.
Jan 29 09:05:11 compute-0 systemd-rc-local-generator[33981]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:05:11 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 29 09:05:11 compute-0 systemd[1]: Reloading.
Jan 29 09:05:12 compute-0 systemd-rc-local-generator[34031]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:05:12 compute-0 systemd[1]: Starting dnf makecache...
Jan 29 09:05:12 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 29 09:05:12 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 29 09:05:12 compute-0 systemd[1]: Reloading.
Jan 29 09:05:12 compute-0 systemd-rc-local-generator[34071]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:05:12 compute-0 dnf[34044]: Failed determining last makecache time.
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-openstack-barbican-42b4c41831408a8e323 153 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 188 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-openstack-cinder-1c00d6490d88e436f26ef 183 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-python-stevedore-c4acc5639fd2329372142 126 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-python-cloudkitty-tests-tempest-2c80f8 144 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-os-refresh-config-9bfc52b5049be2d8de61 151 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Jan 29 09:05:12 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 178 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-python-designate-tests-tempest-347fdbc 196 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-openstack-glance-1fd12c29b339f30fe823e 177 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 176 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-openstack-manila-3c01b7181572c95dac462 161 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-python-whitebox-neutron-tests-tempest- 153 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-openstack-octavia-ba397f07a7331190208c 170 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-openstack-watcher-c014f81a8647287f6dcc 170 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-ansible-config_template-5ccaa22121a7ff 171 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 176 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-openstack-swift-dc98a8463506ac520c469a 166 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-python-tempestconf-8515371b7cceebd4282 202 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: delorean-openstack-heat-ui-013accbfd179753bc3f0 217 kB/s | 3.0 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: CentOS Stream 9 - BaseOS                         62 kB/s | 6.4 kB     00:00
Jan 29 09:05:12 compute-0 dnf[34044]: CentOS Stream 9 - AppStream                      62 kB/s | 6.5 kB     00:00
Jan 29 09:05:13 compute-0 dnf[34044]: CentOS Stream 9 - CRB                            62 kB/s | 6.3 kB     00:00
Jan 29 09:05:13 compute-0 dnf[34044]: CentOS Stream 9 - Extras packages                76 kB/s | 7.3 kB     00:00
Jan 29 09:05:13 compute-0 dnf[34044]: dlrn-antelope-testing                           153 kB/s | 3.0 kB     00:00
Jan 29 09:05:13 compute-0 dnf[34044]: dlrn-antelope-build-deps                        164 kB/s | 3.0 kB     00:00
Jan 29 09:05:13 compute-0 dnf[34044]: centos9-rabbitmq                                 47 kB/s | 3.0 kB     00:00
Jan 29 09:05:13 compute-0 dnf[34044]: centos9-storage                                  13 kB/s | 3.0 kB     00:00
Jan 29 09:05:13 compute-0 dnf[34044]: centos9-opstools                                 36 kB/s | 3.0 kB     00:00
Jan 29 09:05:13 compute-0 dnf[34044]: NFV SIG OpenvSwitch                             108 kB/s | 3.0 kB     00:00
Jan 29 09:05:13 compute-0 dnf[34044]: repo-setup-centos-appstream                     214 kB/s | 4.4 kB     00:00
Jan 29 09:05:14 compute-0 dnf[34044]: repo-setup-centos-baseos                        172 kB/s | 3.9 kB     00:00
Jan 29 09:05:14 compute-0 dnf[34044]: repo-setup-centos-highavailability              158 kB/s | 3.9 kB     00:00
Jan 29 09:05:14 compute-0 dnf[34044]: repo-setup-centos-powertools                    202 kB/s | 4.3 kB     00:00
Jan 29 09:05:14 compute-0 dnf[34044]: Extra Packages for Enterprise Linux 9 - x86_64   89 kB/s |  28 kB     00:00
Jan 29 09:05:15 compute-0 dnf[34044]: Metadata cache created.
Jan 29 09:05:15 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 29 09:05:15 compute-0 systemd[1]: Finished dnf makecache.
Jan 29 09:05:15 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.719s CPU time.
Jan 29 09:06:10 compute-0 kernel: SELinux:  Converting 2727 SID table entries...
Jan 29 09:06:10 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 09:06:10 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 29 09:06:10 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 09:06:10 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 29 09:06:10 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 09:06:10 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 09:06:10 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 09:06:10 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 29 09:06:11 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 09:06:11 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 29 09:06:11 compute-0 systemd[1]: Reloading.
Jan 29 09:06:11 compute-0 systemd-rc-local-generator[34430]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:06:11 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 09:06:11 compute-0 sudo[33788]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 09:06:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 29 09:06:11 compute-0 systemd[1]: run-r2fa8d00b26ae4edd89e99d4adde987d1.service: Deactivated successfully.
Jan 29 09:06:12 compute-0 sudo[35347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgmxuhnoehzeyxtlznqvepkyjbwbrjpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677571.835524-160-21959067601136/AnsiballZ_command.py'
Jan 29 09:06:12 compute-0 sudo[35347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:12 compute-0 python3.9[35349]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:06:13 compute-0 sudo[35347]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:13 compute-0 sudo[35628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siuhpenilerjxmcbonfhfeptskheywxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677573.3577397-168-65623061592079/AnsiballZ_selinux.py'
Jan 29 09:06:13 compute-0 sudo[35628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:14 compute-0 python3.9[35630]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 29 09:06:14 compute-0 sudo[35628]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:14 compute-0 sudo[35780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlraqmtxsnzffkzlqobvsttrabhozkii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677574.5058055-179-161979360609413/AnsiballZ_command.py'
Jan 29 09:06:14 compute-0 sudo[35780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:14 compute-0 python3.9[35782]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 29 09:06:15 compute-0 sudo[35780]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:15 compute-0 sudo[35933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxmqukmdmmjpvbbzwrnupqwxysjvlvuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677575.568485-187-197713197626480/AnsiballZ_file.py'
Jan 29 09:06:15 compute-0 sudo[35933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:16 compute-0 python3.9[35935]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:06:16 compute-0 sudo[35933]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:18 compute-0 sudo[36086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwcdcpoedclonefimlvuwxhwhmxafcuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677576.920314-195-240566352772465/AnsiballZ_mount.py'
Jan 29 09:06:18 compute-0 sudo[36086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:18 compute-0 python3.9[36088]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 29 09:06:18 compute-0 sudo[36086]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:19 compute-0 sudo[36238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhhdyotcrefjfmzsjkeqrcmesnvyotcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677579.6244361-223-249885727798556/AnsiballZ_file.py'
Jan 29 09:06:19 compute-0 sudo[36238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:20 compute-0 python3.9[36240]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:06:20 compute-0 sudo[36238]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:20 compute-0 sudo[36390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrvclxggkejeysficmqnceisbajjcwgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677580.1490624-231-255981857161123/AnsiballZ_stat.py'
Jan 29 09:06:20 compute-0 sudo[36390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:20 compute-0 python3.9[36392]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:06:20 compute-0 sudo[36390]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:20 compute-0 sudo[36513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzyhsqmuzwfkyoeuvtbfugveveyphfdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677580.1490624-231-255981857161123/AnsiballZ_copy.py'
Jan 29 09:06:20 compute-0 sudo[36513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:23 compute-0 python3.9[36515]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769677580.1490624-231-255981857161123/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0a3de624a7429dca03fef8d39cefcc5051a17ce1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:06:23 compute-0 sudo[36513]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:24 compute-0 sudo[36665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umsspjejtbrmqfvlwbnijogrhssvaxek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677584.1430225-255-49364458580802/AnsiballZ_stat.py'
Jan 29 09:06:24 compute-0 sudo[36665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:24 compute-0 python3.9[36667]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:06:24 compute-0 sudo[36665]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:24 compute-0 sudo[36817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyjladfllkkhuswuqkaatiupojqviixm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677584.6503499-263-247603631496343/AnsiballZ_command.py'
Jan 29 09:06:24 compute-0 sudo[36817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:25 compute-0 python3.9[36819]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:06:25 compute-0 sudo[36817]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:25 compute-0 sudo[36970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrlzhhjcvcyaoowbyxcdvbwrjjruakxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677585.2391763-271-248858813221681/AnsiballZ_file.py'
Jan 29 09:06:25 compute-0 sudo[36970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:25 compute-0 python3.9[36972]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:06:25 compute-0 sudo[36970]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:26 compute-0 sudo[37122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uawafqgyiwjdtrwnscmjfjyastpvpmfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677585.9579058-282-105601316850381/AnsiballZ_getent.py'
Jan 29 09:06:26 compute-0 sudo[37122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:26 compute-0 python3.9[37124]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 29 09:06:26 compute-0 sudo[37122]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:26 compute-0 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 09:06:27 compute-0 sudo[37276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjqedaiyjgndzdlcetgegtciyeunkrrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677586.6593049-290-198388153953293/AnsiballZ_group.py'
Jan 29 09:06:27 compute-0 sudo[37276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:27 compute-0 python3.9[37278]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 29 09:06:27 compute-0 groupadd[37279]: group added to /etc/group: name=qemu, GID=107
Jan 29 09:06:27 compute-0 groupadd[37279]: group added to /etc/gshadow: name=qemu
Jan 29 09:06:27 compute-0 groupadd[37279]: new group: name=qemu, GID=107
Jan 29 09:06:27 compute-0 sudo[37276]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:27 compute-0 sudo[37434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oviykkhjywowguusoahcefbcdifxyjue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677587.3599653-298-158433101072175/AnsiballZ_user.py'
Jan 29 09:06:27 compute-0 sudo[37434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:28 compute-0 python3.9[37436]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 29 09:06:28 compute-0 useradd[37438]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 29 09:06:28 compute-0 sudo[37434]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:28 compute-0 sudo[37594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxkmhkqqilixalxhioscnaiwstkgilqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677588.2416003-306-111358338850022/AnsiballZ_getent.py'
Jan 29 09:06:28 compute-0 sudo[37594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:28 compute-0 python3.9[37596]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 29 09:06:28 compute-0 sudo[37594]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:29 compute-0 sudo[37747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuixrgqtxzyhmmwsrhprcrdtgbrqvhya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677588.7728207-314-273142077984803/AnsiballZ_group.py'
Jan 29 09:06:29 compute-0 sudo[37747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:29 compute-0 python3.9[37749]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 29 09:06:29 compute-0 groupadd[37750]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 29 09:06:29 compute-0 groupadd[37750]: group added to /etc/gshadow: name=hugetlbfs
Jan 29 09:06:29 compute-0 groupadd[37750]: new group: name=hugetlbfs, GID=42477
Jan 29 09:06:29 compute-0 sudo[37747]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:29 compute-0 sudo[37905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwzsbdnxynuqxltimwlcffigwcddeixa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677589.402496-323-87727528021800/AnsiballZ_file.py'
Jan 29 09:06:29 compute-0 sudo[37905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:29 compute-0 python3.9[37907]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 29 09:06:29 compute-0 sudo[37905]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:30 compute-0 sudo[38057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnxhcvdhdrskmrlngssapicrlhojnliz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677590.0398445-334-92678599168700/AnsiballZ_dnf.py'
Jan 29 09:06:30 compute-0 sudo[38057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:30 compute-0 python3.9[38059]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:06:32 compute-0 sudo[38057]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:32 compute-0 sudo[38210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmyuqiicrdjsxjlqmkarerhlfnbmidiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677592.1851623-342-144915838858834/AnsiballZ_file.py'
Jan 29 09:06:32 compute-0 sudo[38210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:32 compute-0 python3.9[38212]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:06:32 compute-0 sudo[38210]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:32 compute-0 sudo[38362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iienmrtklgmhsjkwwiyrekorvetldcer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677592.7228823-350-145690458775601/AnsiballZ_stat.py'
Jan 29 09:06:32 compute-0 sudo[38362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:33 compute-0 python3.9[38364]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:06:33 compute-0 sudo[38362]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:33 compute-0 sudo[38485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yovajosvywdtufokpewhbxnldisekysx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677592.7228823-350-145690458775601/AnsiballZ_copy.py'
Jan 29 09:06:33 compute-0 sudo[38485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:33 compute-0 python3.9[38487]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769677592.7228823-350-145690458775601/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:06:33 compute-0 sudo[38485]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:34 compute-0 sudo[38637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqhcaqthkyxcvexjtoojyniqdjinzbgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677593.7608564-365-244043345342653/AnsiballZ_systemd.py'
Jan 29 09:06:34 compute-0 sudo[38637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:34 compute-0 python3.9[38639]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:06:34 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 29 09:06:34 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 09:06:34 compute-0 kernel: Bridge firewalling registered
Jan 29 09:06:34 compute-0 systemd-modules-load[38643]: Inserted module 'br_netfilter'
Jan 29 09:06:34 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 29 09:06:34 compute-0 sudo[38637]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:35 compute-0 sudo[38797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsyuzmsqqbrbnqffezrboozdbscgaiww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677595.00447-373-109142357587416/AnsiballZ_stat.py'
Jan 29 09:06:35 compute-0 sudo[38797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:35 compute-0 python3.9[38799]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:06:35 compute-0 sudo[38797]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:35 compute-0 sudo[38920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtfgforqopkrdmyoyqfhcevfawuzpokj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677595.00447-373-109142357587416/AnsiballZ_copy.py'
Jan 29 09:06:35 compute-0 sudo[38920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:35 compute-0 python3.9[38922]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769677595.00447-373-109142357587416/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:06:35 compute-0 sudo[38920]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:36 compute-0 sudo[39072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbmcymvrfijrsxlzlginuaxsfchrpryo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677596.1209986-391-186359471822117/AnsiballZ_dnf.py'
Jan 29 09:06:36 compute-0 sudo[39072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:36 compute-0 python3.9[39074]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:06:39 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Jan 29 09:06:39 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Jan 29 09:06:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 09:06:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 29 09:06:39 compute-0 systemd[1]: Reloading.
Jan 29 09:06:39 compute-0 systemd-rc-local-generator[39133]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:06:39 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 09:06:40 compute-0 sudo[39072]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:40 compute-0 python3.9[40539]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:06:41 compute-0 python3.9[41563]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 29 09:06:42 compute-0 python3.9[42329]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:06:42 compute-0 sudo[43256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itnzqrsykjywdfovnewgtatvoomkjmuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677602.3602035-430-39333273422759/AnsiballZ_command.py'
Jan 29 09:06:42 compute-0 sudo[43256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:42 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 09:06:42 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 29 09:06:42 compute-0 systemd[1]: man-db-cache-update.service: Consumed 3.873s CPU time.
Jan 29 09:06:42 compute-0 systemd[1]: run-rfe06009b7fbe415a8f316e2d0d548964.service: Deactivated successfully.
Jan 29 09:06:42 compute-0 python3.9[43279]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:06:42 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 29 09:06:43 compute-0 systemd[1]: Starting Authorization Manager...
Jan 29 09:06:43 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 29 09:06:43 compute-0 polkitd[43507]: Started polkitd version 0.117
Jan 29 09:06:43 compute-0 polkitd[43507]: Loading rules from directory /etc/polkit-1/rules.d
Jan 29 09:06:43 compute-0 polkitd[43507]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 29 09:06:43 compute-0 polkitd[43507]: Finished loading, compiling and executing 2 rules
Jan 29 09:06:43 compute-0 polkitd[43507]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 29 09:06:43 compute-0 systemd[1]: Started Authorization Manager.
Jan 29 09:06:43 compute-0 sudo[43256]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:43 compute-0 sudo[43675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnolitmrsezvhwxvrenufwwipiocraiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677603.511054-439-176572736590955/AnsiballZ_systemd.py'
Jan 29 09:06:43 compute-0 sudo[43675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:44 compute-0 python3.9[43677]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:06:44 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 29 09:06:44 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 29 09:06:44 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 29 09:06:44 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 29 09:06:44 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 29 09:06:44 compute-0 sudo[43675]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:44 compute-0 python3.9[43839]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 29 09:06:46 compute-0 sudo[43989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryfvnermsedegmmnjhsiybrzskzdjkcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677606.3170185-496-94526658188862/AnsiballZ_systemd.py'
Jan 29 09:06:46 compute-0 sudo[43989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:46 compute-0 irqbalance[794]: Cannot change IRQ 33 affinity: Operation not permitted
Jan 29 09:06:46 compute-0 irqbalance[794]: IRQ 33 affinity is now unmanaged
Jan 29 09:06:46 compute-0 python3.9[43991]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:06:46 compute-0 systemd[1]: Reloading.
Jan 29 09:06:46 compute-0 systemd-rc-local-generator[44021]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:06:47 compute-0 sudo[43989]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:47 compute-0 sudo[44178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmmagkcamnzkyauofepeuzgflthxaqge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677607.1704502-496-86125765130178/AnsiballZ_systemd.py'
Jan 29 09:06:47 compute-0 sudo[44178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:47 compute-0 python3.9[44180]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:06:47 compute-0 systemd[1]: Reloading.
Jan 29 09:06:47 compute-0 systemd-rc-local-generator[44205]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:06:47 compute-0 sudo[44178]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:48 compute-0 sudo[44366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmshgakqlzbhkjnhzsyvvpwvebqjqhvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677608.0930974-512-154000426160213/AnsiballZ_command.py'
Jan 29 09:06:48 compute-0 sudo[44366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:48 compute-0 python3.9[44368]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:06:48 compute-0 sudo[44366]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:48 compute-0 sudo[44519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhztnvleupbmptbnbcmgsqrvimqecwlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677608.614581-520-170722063285626/AnsiballZ_command.py'
Jan 29 09:06:48 compute-0 sudo[44519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:49 compute-0 python3.9[44521]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:06:49 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 29 09:06:49 compute-0 sudo[44519]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:49 compute-0 sudo[44672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sygosxyuqacwvstgjxipykjmmisrutyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677609.226215-528-13075396311392/AnsiballZ_command.py'
Jan 29 09:06:49 compute-0 sudo[44672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:49 compute-0 python3.9[44674]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:06:50 compute-0 sudo[44672]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:51 compute-0 sudo[44834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iajvnhzbkfuomocetjzcsstqiweqdzed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677611.0641189-536-4324312802535/AnsiballZ_command.py'
Jan 29 09:06:51 compute-0 sudo[44834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:51 compute-0 python3.9[44836]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:06:51 compute-0 sudo[44834]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:51 compute-0 sudo[44987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgmleytpientilczpnsnggguvskrzlgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677611.5841966-544-79956457624206/AnsiballZ_systemd.py'
Jan 29 09:06:51 compute-0 sudo[44987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:06:52 compute-0 python3.9[44989]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:06:52 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 09:06:52 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 29 09:06:52 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 29 09:06:52 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 29 09:06:52 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 09:06:52 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 29 09:06:52 compute-0 sudo[44987]: pam_unix(sudo:session): session closed for user root
Jan 29 09:06:52 compute-0 sshd-session[31325]: Connection closed by 192.168.122.30 port 60036
Jan 29 09:06:52 compute-0 sshd-session[31322]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:06:52 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 09:06:52 compute-0 systemd[1]: session-8.scope: Consumed 2min 1.045s CPU time.
Jan 29 09:06:52 compute-0 systemd-logind[799]: Session 8 logged out. Waiting for processes to exit.
Jan 29 09:06:52 compute-0 systemd-logind[799]: Removed session 8.
Jan 29 09:06:58 compute-0 sshd-session[45021]: Accepted publickey for zuul from 192.168.122.30 port 58514 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:06:58 compute-0 systemd-logind[799]: New session 9 of user zuul.
Jan 29 09:06:58 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 29 09:06:58 compute-0 sshd-session[45021]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:06:59 compute-0 python3.9[45174]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:07:00 compute-0 sudo[45328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byyztpxkxzreggcbuehnboztkapfwtni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677619.734229-31-175362144492614/AnsiballZ_getent.py'
Jan 29 09:07:00 compute-0 sudo[45328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:00 compute-0 python3.9[45330]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 29 09:07:00 compute-0 sudo[45328]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:00 compute-0 sudo[45481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dixjgrvdbmhidmhzfzhdjqshyldchwvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677620.4591684-39-14516342781336/AnsiballZ_group.py'
Jan 29 09:07:00 compute-0 sudo[45481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:01 compute-0 python3.9[45483]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 29 09:07:01 compute-0 groupadd[45484]: group added to /etc/group: name=openvswitch, GID=42476
Jan 29 09:07:01 compute-0 groupadd[45484]: group added to /etc/gshadow: name=openvswitch
Jan 29 09:07:01 compute-0 groupadd[45484]: new group: name=openvswitch, GID=42476
Jan 29 09:07:01 compute-0 sudo[45481]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:01 compute-0 sudo[45639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irvvgqcagougzaazbhsbkwwqqfmhezwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677621.2115293-47-99253393163268/AnsiballZ_user.py'
Jan 29 09:07:01 compute-0 sudo[45639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:01 compute-0 python3.9[45641]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 29 09:07:01 compute-0 useradd[45643]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 29 09:07:01 compute-0 useradd[45643]: add 'openvswitch' to group 'hugetlbfs'
Jan 29 09:07:01 compute-0 useradd[45643]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 29 09:07:02 compute-0 sudo[45639]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:02 compute-0 sudo[45799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vteguxejeuorrqtufchnqxnxwegevlue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677622.2299056-57-146569128515927/AnsiballZ_setup.py'
Jan 29 09:07:02 compute-0 sudo[45799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:02 compute-0 python3.9[45801]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:07:02 compute-0 sudo[45799]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:03 compute-0 sudo[45883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axaubvwrrxdaavzvuatgvlyznmvaublv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677622.2299056-57-146569128515927/AnsiballZ_dnf.py'
Jan 29 09:07:03 compute-0 sudo[45883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:03 compute-0 python3.9[45885]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 29 09:07:05 compute-0 sudo[45883]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:05 compute-0 sudo[46047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdjdhiwfffhruniduywjtkiztdrmvzlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677625.7738214-71-164720214430538/AnsiballZ_dnf.py'
Jan 29 09:07:05 compute-0 sudo[46047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:06 compute-0 python3.9[46049]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:07:17 compute-0 kernel: SELinux:  Converting 2739 SID table entries...
Jan 29 09:07:17 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 09:07:17 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 29 09:07:17 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 09:07:17 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 29 09:07:17 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 09:07:17 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 09:07:17 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 09:07:17 compute-0 groupadd[46072]: group added to /etc/group: name=unbound, GID=994
Jan 29 09:07:17 compute-0 groupadd[46072]: group added to /etc/gshadow: name=unbound
Jan 29 09:07:17 compute-0 groupadd[46072]: new group: name=unbound, GID=994
Jan 29 09:07:17 compute-0 useradd[46079]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 29 09:07:17 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 29 09:07:17 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 29 09:07:18 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 09:07:18 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 29 09:07:18 compute-0 systemd[1]: Reloading.
Jan 29 09:07:18 compute-0 systemd-rc-local-generator[46570]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:07:18 compute-0 systemd-sysv-generator[46577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:07:18 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 09:07:19 compute-0 sudo[46047]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:19 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 09:07:19 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 29 09:07:19 compute-0 systemd[1]: run-r889278e731af4a089ed2ca11f3dc1f33.service: Deactivated successfully.
Jan 29 09:07:20 compute-0 sudo[47147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arywihphocpayuhlcaaqifjmcubnbgkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677639.4274013-79-131015011844164/AnsiballZ_systemd.py'
Jan 29 09:07:20 compute-0 sudo[47147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:20 compute-0 python3.9[47149]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 09:07:20 compute-0 systemd[1]: Reloading.
Jan 29 09:07:20 compute-0 systemd-rc-local-generator[47176]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:07:20 compute-0 systemd-sysv-generator[47183]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:07:20 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 29 09:07:20 compute-0 chown[47191]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 29 09:07:20 compute-0 ovs-ctl[47196]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 29 09:07:20 compute-0 ovs-ctl[47196]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 29 09:07:20 compute-0 ovs-ctl[47196]: Starting ovsdb-server [  OK  ]
Jan 29 09:07:20 compute-0 ovs-vsctl[47245]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 29 09:07:20 compute-0 ovs-vsctl[47264]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"347a774e-f56f-46e9-8fb5-240ce07d1693\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 29 09:07:20 compute-0 ovs-ctl[47196]: Configuring Open vSwitch system IDs [  OK  ]
Jan 29 09:07:20 compute-0 ovs-ctl[47196]: Enabling remote OVSDB managers [  OK  ]
Jan 29 09:07:20 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 29 09:07:20 compute-0 ovs-vsctl[47270]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 29 09:07:20 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 29 09:07:20 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 29 09:07:20 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 29 09:07:20 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 29 09:07:20 compute-0 ovs-ctl[47314]: Inserting openvswitch module [  OK  ]
Jan 29 09:07:21 compute-0 ovs-ctl[47283]: Starting ovs-vswitchd [  OK  ]
Jan 29 09:07:21 compute-0 ovs-vsctl[47332]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 29 09:07:21 compute-0 ovs-ctl[47283]: Enabling remote OVSDB managers [  OK  ]
Jan 29 09:07:21 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 29 09:07:21 compute-0 systemd[1]: Starting Open vSwitch...
Jan 29 09:07:21 compute-0 systemd[1]: Finished Open vSwitch.
Jan 29 09:07:21 compute-0 sudo[47147]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:21 compute-0 python3.9[47483]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:07:22 compute-0 sudo[47633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujtuwmukjqckqidywmlhzsjuyzgrpuva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677642.0100186-97-197575988490594/AnsiballZ_sefcontext.py'
Jan 29 09:07:22 compute-0 sudo[47633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:22 compute-0 python3.9[47635]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 29 09:07:23 compute-0 kernel: SELinux:  Converting 2753 SID table entries...
Jan 29 09:07:23 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 09:07:23 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 29 09:07:23 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 09:07:23 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 29 09:07:23 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 09:07:23 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 09:07:23 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 09:07:23 compute-0 sudo[47633]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:24 compute-0 python3.9[47790]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:07:25 compute-0 sudo[47946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxeqpvtsttinmvctsvgkdungwmzwyami ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677644.963096-115-117010922567981/AnsiballZ_dnf.py'
Jan 29 09:07:25 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 29 09:07:25 compute-0 sudo[47946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:25 compute-0 python3.9[47948]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:07:26 compute-0 sudo[47946]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:27 compute-0 sudo[48099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmiwrhfdkwngisapmuzrmhwddqlgmwey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677646.733443-123-128203162430352/AnsiballZ_command.py'
Jan 29 09:07:27 compute-0 sudo[48099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:27 compute-0 python3.9[48101]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:07:27 compute-0 sudo[48099]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:28 compute-0 sudo[48386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pahldjduzhxrnxwdzqvfcwfizawxzuob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677648.015686-131-254693297159674/AnsiballZ_file.py'
Jan 29 09:07:28 compute-0 sudo[48386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:28 compute-0 python3.9[48388]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 29 09:07:28 compute-0 sudo[48386]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:29 compute-0 python3.9[48538]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:07:29 compute-0 sudo[48690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgjoodmmrvrgomvnovzvzmakatkqbsik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677649.5139658-147-265566389257876/AnsiballZ_dnf.py'
Jan 29 09:07:29 compute-0 sudo[48690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:29 compute-0 python3.9[48692]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:07:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 09:07:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 29 09:07:31 compute-0 systemd[1]: Reloading.
Jan 29 09:07:31 compute-0 systemd-sysv-generator[48735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:07:31 compute-0 systemd-rc-local-generator[48732]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:07:31 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 09:07:32 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 09:07:32 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 29 09:07:32 compute-0 systemd[1]: run-r5eadeb59e7804d4db05a1ea192827e8a.service: Deactivated successfully.
Jan 29 09:07:32 compute-0 sudo[48690]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:32 compute-0 sudo[49008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciyqibahlukfmtqjdoawzvjfdfftiuou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677652.4110317-155-242568341568716/AnsiballZ_systemd.py'
Jan 29 09:07:32 compute-0 sudo[49008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:32 compute-0 python3.9[49010]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:07:32 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 29 09:07:32 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 29 09:07:32 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 29 09:07:32 compute-0 systemd[1]: Stopping Network Manager...
Jan 29 09:07:32 compute-0 NetworkManager[7180]: <info>  [1769677652.9443] caught SIGTERM, shutting down normally.
Jan 29 09:07:32 compute-0 NetworkManager[7180]: <info>  [1769677652.9459] dhcp4 (eth0): canceled DHCP transaction
Jan 29 09:07:32 compute-0 NetworkManager[7180]: <info>  [1769677652.9459] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 29 09:07:32 compute-0 NetworkManager[7180]: <info>  [1769677652.9459] dhcp4 (eth0): state changed no lease
Jan 29 09:07:32 compute-0 NetworkManager[7180]: <info>  [1769677652.9462] manager: NetworkManager state is now CONNECTED_SITE
Jan 29 09:07:32 compute-0 NetworkManager[7180]: <info>  [1769677652.9520] exiting (success)
Jan 29 09:07:32 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 29 09:07:32 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 29 09:07:32 compute-0 systemd[1]: Stopped Network Manager.
Jan 29 09:07:32 compute-0 systemd[1]: NetworkManager.service: Consumed 13.726s CPU time, 4.1M memory peak, read 0B from disk, written 33.0K to disk.
Jan 29 09:07:32 compute-0 systemd[1]: Starting Network Manager...
Jan 29 09:07:32 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 29 09:07:32 compute-0 NetworkManager[49019]: <info>  [1769677652.9995] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:50915982-a81b-4e96-99dd-0c758f60a157)
Jan 29 09:07:32 compute-0 NetworkManager[49019]: <info>  [1769677652.9999] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0046] manager[0x56552c3d5000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 29 09:07:33 compute-0 systemd[1]: Starting Hostname Service...
Jan 29 09:07:33 compute-0 systemd[1]: Started Hostname Service.
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0683] hostname: hostname: using hostnamed
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0684] hostname: static hostname changed from (none) to "compute-0"
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0689] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0693] manager[0x56552c3d5000]: rfkill: Wi-Fi hardware radio set enabled
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0693] manager[0x56552c3d5000]: rfkill: WWAN hardware radio set enabled
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0710] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0718] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0718] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0719] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0719] manager: Networking is enabled by state file
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0722] settings: Loaded settings plugin: keyfile (internal)
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0725] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0749] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0757] dhcp: init: Using DHCP client 'internal'
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0759] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0763] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0769] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0774] device (lo): Activation: starting connection 'lo' (3aa75622-136a-4c98-b79e-e9eaf4c8f693)
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0779] device (eth0): carrier: link connected
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0782] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0786] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0787] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0792] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0797] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0800] device (eth1): carrier: link connected
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0803] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0808] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (f22e3c52-1212-547c-866b-5f2ef677cd2b) (indicated)
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0808] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0812] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0817] device (eth1): Activation: starting connection 'ci-private-network' (f22e3c52-1212-547c-866b-5f2ef677cd2b)
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0822] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 29 09:07:33 compute-0 systemd[1]: Started Network Manager.
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0828] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0832] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0834] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0836] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0847] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0849] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0851] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0853] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0857] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0859] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0867] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0879] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0890] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0893] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0898] device (lo): Activation: successful, device activated.
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0904] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0905] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0908] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0911] device (eth1): Activation: successful, device activated.
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0918] dhcp4 (eth0): state changed new lease, address=38.102.83.196
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.0931] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 29 09:07:33 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 29 09:07:33 compute-0 sudo[49008]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.1427] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.1498] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.1500] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.1503] manager: NetworkManager state is now CONNECTED_SITE
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.1506] device (eth0): Activation: successful, device activated.
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.1511] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 29 09:07:33 compute-0 NetworkManager[49019]: <info>  [1769677653.1586] manager: startup complete
Jan 29 09:07:33 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 29 09:07:33 compute-0 sudo[49234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knjenfnddzbbqqvwybsxjdthupvkqswe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677653.2657568-163-198559130379045/AnsiballZ_dnf.py'
Jan 29 09:07:33 compute-0 sudo[49234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:33 compute-0 python3.9[49236]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:07:37 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 09:07:37 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 29 09:07:37 compute-0 systemd[1]: Reloading.
Jan 29 09:07:37 compute-0 systemd-rc-local-generator[49287]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:07:37 compute-0 systemd-sysv-generator[49291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:07:37 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 09:07:38 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 09:07:38 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 29 09:07:38 compute-0 systemd[1]: run-r6066ad546b16465ea65563321ba41a3d.service: Deactivated successfully.
Jan 29 09:07:38 compute-0 sudo[49234]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:39 compute-0 sudo[49694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flqrjmdmspjswvhilxlxdoztttqlmiwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677659.0792215-175-159455456166757/AnsiballZ_stat.py'
Jan 29 09:07:39 compute-0 sudo[49694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:39 compute-0 python3.9[49696]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:07:39 compute-0 sudo[49694]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:40 compute-0 sudo[49846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysayyiqsdszhgnjpfryobzrwtebqnqfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677659.8155096-184-27712862284007/AnsiballZ_ini_file.py'
Jan 29 09:07:40 compute-0 sudo[49846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:40 compute-0 python3.9[49848]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:07:40 compute-0 sudo[49846]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:40 compute-0 sudo[50000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfupzxiuwnoavizxevuzyzxiwsqfbqlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677660.6488762-194-56773609801078/AnsiballZ_ini_file.py'
Jan 29 09:07:40 compute-0 sudo[50000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:41 compute-0 python3.9[50002]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:07:41 compute-0 sudo[50000]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:41 compute-0 sudo[50152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdvkqkzhebizoitxwyxmmsglgtjmlxok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677661.1661465-194-194596557460329/AnsiballZ_ini_file.py'
Jan 29 09:07:41 compute-0 sudo[50152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:41 compute-0 python3.9[50154]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:07:41 compute-0 sudo[50152]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:41 compute-0 sudo[50304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilmbuannheqljweyztecfyasmwywtaaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677661.6800802-209-91055415284623/AnsiballZ_ini_file.py'
Jan 29 09:07:41 compute-0 sudo[50304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:42 compute-0 python3.9[50306]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:07:42 compute-0 sudo[50304]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:42 compute-0 sudo[50456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrlvrfvywgxcclzyquhmhkyulrvtrtkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677662.204548-209-19905461313845/AnsiballZ_ini_file.py'
Jan 29 09:07:42 compute-0 sudo[50456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:42 compute-0 python3.9[50458]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:07:42 compute-0 sudo[50456]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:43 compute-0 sudo[50608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvapzrjwkbqjwihvegsrkjvcbrmmdmbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677662.7646437-224-216099140452239/AnsiballZ_stat.py'
Jan 29 09:07:43 compute-0 sudo[50608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:43 compute-0 python3.9[50610]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:07:43 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 29 09:07:43 compute-0 sudo[50608]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:43 compute-0 sudo[50731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlmpjnxpvziatveerhwhzcbvvzmszakb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677662.7646437-224-216099140452239/AnsiballZ_copy.py'
Jan 29 09:07:43 compute-0 sudo[50731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:43 compute-0 python3.9[50733]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769677662.7646437-224-216099140452239/.source _original_basename=.scv18ecc follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:07:43 compute-0 sudo[50731]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:44 compute-0 sudo[50883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndaeufdrlkxyygkuupscfmdgcprhzixz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677664.0233288-239-196380233920426/AnsiballZ_file.py'
Jan 29 09:07:44 compute-0 sudo[50883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:44 compute-0 python3.9[50885]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:07:44 compute-0 sudo[50883]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:45 compute-0 sudo[51035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzkrhshabqfdmmdttnvpmantilkfblca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677664.6267598-247-17066186851434/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 29 09:07:45 compute-0 sudo[51035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:45 compute-0 python3.9[51037]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 29 09:07:45 compute-0 sudo[51035]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:45 compute-0 sudo[51187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfonyvwdfqbchdghfomrhovynioercwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677665.4404953-256-259462291280024/AnsiballZ_file.py'
Jan 29 09:07:45 compute-0 sudo[51187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:45 compute-0 python3.9[51189]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:07:45 compute-0 sudo[51187]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:46 compute-0 sudo[51339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ortqzkkwnjcagxxcxjnjyrxrkrpmxubs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677666.1238165-266-26649088260937/AnsiballZ_stat.py'
Jan 29 09:07:46 compute-0 sudo[51339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:46 compute-0 sudo[51339]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:46 compute-0 sudo[51462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svbieuuetnhaordytvbzcxgdajmltflp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677666.1238165-266-26649088260937/AnsiballZ_copy.py'
Jan 29 09:07:46 compute-0 sudo[51462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:47 compute-0 sudo[51462]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:47 compute-0 sudo[51614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcuyxybzsfxkevgctwsowatbukommlsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677667.2591553-281-17894817246571/AnsiballZ_slurp.py'
Jan 29 09:07:47 compute-0 sudo[51614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:47 compute-0 python3.9[51616]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 29 09:07:47 compute-0 sudo[51614]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:48 compute-0 sudo[51789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eibrmytfyqvjulexgorkpngsphwjyvwr ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677668.0039847-290-5203919279201/async_wrapper.py j797714443466 300 /home/zuul/.ansible/tmp/ansible-tmp-1769677668.0039847-290-5203919279201/AnsiballZ_edpm_os_net_config.py _'
Jan 29 09:07:48 compute-0 sudo[51789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:48 compute-0 ansible-async_wrapper.py[51791]: Invoked with j797714443466 300 /home/zuul/.ansible/tmp/ansible-tmp-1769677668.0039847-290-5203919279201/AnsiballZ_edpm_os_net_config.py _
Jan 29 09:07:48 compute-0 ansible-async_wrapper.py[51794]: Starting module and watcher
Jan 29 09:07:48 compute-0 ansible-async_wrapper.py[51794]: Start watching 51795 (300)
Jan 29 09:07:48 compute-0 ansible-async_wrapper.py[51795]: Start module (51795)
Jan 29 09:07:48 compute-0 ansible-async_wrapper.py[51791]: Return async_wrapper task started.
Jan 29 09:07:48 compute-0 sudo[51789]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:49 compute-0 python3.9[51796]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 29 09:07:49 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 29 09:07:49 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 29 09:07:49 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 29 09:07:49 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 29 09:07:49 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 29 09:07:50 compute-0 NetworkManager[49019]: <info>  [1769677670.9874] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51797 uid=0 result="success"
Jan 29 09:07:50 compute-0 NetworkManager[49019]: <info>  [1769677670.9903] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0485] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0488] audit: op="connection-add" uuid="9ef07f59-0d30-433c-9210-51c639197676" name="br-ex-br" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0504] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0506] audit: op="connection-add" uuid="ef195876-b17f-4c07-9550-f919d6591984" name="br-ex-port" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0520] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0522] audit: op="connection-add" uuid="5b6e774f-a7b5-4079-a0a1-683446e7092b" name="eth1-port" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0535] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0536] audit: op="connection-add" uuid="d20ccf56-8dcb-4a33-ae6d-58ef754ced50" name="vlan20-port" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0548] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0549] audit: op="connection-add" uuid="1eea7322-051b-4ece-a928-6e069734b488" name="vlan21-port" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0560] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0561] audit: op="connection-add" uuid="de997d16-d6e2-42dc-8833-1080889a76a8" name="vlan22-port" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0571] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0572] audit: op="connection-add" uuid="f3869322-afce-4a0f-b6dc-52cc0916a2f1" name="vlan23-port" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0592] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0609] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0610] audit: op="connection-add" uuid="db2ca846-eb7f-45b8-b19c-8bbae1ee6917" name="br-ex-if" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0639] audit: op="connection-update" uuid="f22e3c52-1212-547c-866b-5f2ef677cd2b" name="ci-private-network" args="ovs-interface.type,connection.port-type,connection.controller,connection.timestamp,connection.master,connection.slave-type,ipv4.addresses,ipv4.dns,ipv4.never-default,ipv4.method,ipv4.routes,ipv4.routing-rules,ipv6.addresses,ipv6.dns,ipv6.method,ipv6.addr-gen-mode,ipv6.routes,ipv6.routing-rules,ovs-external-ids.data" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0654] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0656] audit: op="connection-add" uuid="ea8c122c-0046-4c92-bbf3-02a7a061e990" name="vlan20-if" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0668] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0670] audit: op="connection-add" uuid="14438087-aaf3-4c43-b2ee-c4d6582e4207" name="vlan21-if" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0688] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0689] audit: op="connection-add" uuid="eb322d20-997b-4302-bc30-509bbb6b8ec6" name="vlan22-if" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0712] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0714] audit: op="connection-add" uuid="b50494d9-b135-43ab-b33e-13f50476a5d8" name="vlan23-if" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0729] audit: op="connection-delete" uuid="6000cf6a-9993-3a31-bcd5-cf355e28135a" name="Wired connection 1" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0742] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <warn>  [1769677671.0745] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0754] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0759] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (9ef07f59-0d30-433c-9210-51c639197676)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0760] audit: op="connection-activate" uuid="9ef07f59-0d30-433c-9210-51c639197676" name="br-ex-br" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0763] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <warn>  [1769677671.0764] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0770] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0776] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (ef195876-b17f-4c07-9550-f919d6591984)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0778] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <warn>  [1769677671.0779] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0784] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0790] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (5b6e774f-a7b5-4079-a0a1-683446e7092b)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0792] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <warn>  [1769677671.0794] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0802] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0808] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (d20ccf56-8dcb-4a33-ae6d-58ef754ced50)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0811] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <warn>  [1769677671.0821] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0830] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0835] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (1eea7322-051b-4ece-a928-6e069734b488)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0837] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <warn>  [1769677671.0838] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0844] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0850] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (de997d16-d6e2-42dc-8833-1080889a76a8)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0853] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <warn>  [1769677671.0854] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0859] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0863] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (f3869322-afce-4a0f-b6dc-52cc0916a2f1)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0863] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0866] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0868] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0875] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <warn>  [1769677671.0876] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0879] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0884] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (db2ca846-eb7f-45b8-b19c-8bbae1ee6917)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0885] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0889] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0890] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0892] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0893] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0904] device (eth1): disconnecting for new activation request.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0905] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0910] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0911] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0912] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0918] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <warn>  [1769677671.0919] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0923] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0927] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (ea8c122c-0046-4c92-bbf3-02a7a061e990)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0928] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0931] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0933] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0935] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0938] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <warn>  [1769677671.0939] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0943] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0947] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (14438087-aaf3-4c43-b2ee-c4d6582e4207)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0948] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0951] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0953] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0955] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0958] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <warn>  [1769677671.0959] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0962] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0967] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (eb322d20-997b-4302-bc30-509bbb6b8ec6)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0968] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0971] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0973] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0974] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0977] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <warn>  [1769677671.0978] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0982] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0987] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (b50494d9-b135-43ab-b33e-13f50476a5d8)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0987] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0991] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0993] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0994] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.0996] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1009] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1012] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1016] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1018] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1025] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1029] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1034] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1038] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1041] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1046] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1050] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1053] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1056] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1061] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1067] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 systemd-udevd[51801]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 09:07:51 compute-0 kernel: Timeout policy base is empty
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1071] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1073] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1078] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1083] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1086] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1088] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1092] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1097] dhcp4 (eth0): canceled DHCP transaction
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1097] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1097] dhcp4 (eth0): state changed no lease
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1100] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1112] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1123] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51797 uid=0 result="fail" reason="Device is not activated"
Jan 29 09:07:51 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1183] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1188] dhcp4 (eth0): state changed new lease, address=38.102.83.196
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1239] device (eth1): disconnecting for new activation request.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1240] audit: op="connection-activate" uuid="f22e3c52-1212-547c-866b-5f2ef677cd2b" name="ci-private-network" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1240] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 29 09:07:51 compute-0 kernel: br-ex: entered promiscuous mode
Jan 29 09:07:51 compute-0 kernel: vlan21: entered promiscuous mode
Jan 29 09:07:51 compute-0 systemd-udevd[51803]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1392] device (eth1): Activation: starting connection 'ci-private-network' (f22e3c52-1212-547c-866b-5f2ef677cd2b)
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1397] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1405] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 29 09:07:51 compute-0 kernel: vlan20: entered promiscuous mode
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1431] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1441] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1444] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1450] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1454] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1463] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1467] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1468] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1469] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1471] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1472] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1473] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1475] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51797 uid=0 result="success"
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1480] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1484] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1492] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1493] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1495] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1497] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1500] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1503] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1505] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1508] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1510] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 29 09:07:51 compute-0 kernel: vlan22: entered promiscuous mode
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1513] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1516] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1518] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1521] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1527] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1530] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1545] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1547] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1560] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1570] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1572] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1577] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1582] device (eth1): Activation: successful, device activated.
Jan 29 09:07:51 compute-0 systemd-udevd[51802]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1590] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 kernel: vlan23: entered promiscuous mode
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1592] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1596] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1856] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1857] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1859] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1862] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1872] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1877] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1883] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1887] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1907] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1914] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1929] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1930] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1934] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1941] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1943] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 09:07:51 compute-0 NetworkManager[49019]: <info>  [1769677671.1946] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 29 09:07:52 compute-0 NetworkManager[49019]: <info>  [1769677672.3530] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51797 uid=0 result="success"
Jan 29 09:07:52 compute-0 sudo[52152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gacibxyjqjnhgmzikedfjthbimujxbob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677671.9343417-290-255531047840568/AnsiballZ_async_status.py'
Jan 29 09:07:52 compute-0 sudo[52152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:52 compute-0 NetworkManager[49019]: <info>  [1769677672.4973] checkpoint[0x56552c3aa950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 29 09:07:52 compute-0 NetworkManager[49019]: <info>  [1769677672.4976] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51797 uid=0 result="success"
Jan 29 09:07:52 compute-0 python3.9[52154]: ansible-ansible.legacy.async_status Invoked with jid=j797714443466.51791 mode=status _async_dir=/root/.ansible_async
Jan 29 09:07:52 compute-0 sudo[52152]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:52 compute-0 NetworkManager[49019]: <info>  [1769677672.7889] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51797 uid=0 result="success"
Jan 29 09:07:52 compute-0 NetworkManager[49019]: <info>  [1769677672.7905] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51797 uid=0 result="success"
Jan 29 09:07:53 compute-0 NetworkManager[49019]: <info>  [1769677673.0066] audit: op="networking-control" arg="global-dns-configuration" pid=51797 uid=0 result="success"
Jan 29 09:07:53 compute-0 NetworkManager[49019]: <info>  [1769677673.0087] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 29 09:07:53 compute-0 NetworkManager[49019]: <info>  [1769677673.0109] audit: op="networking-control" arg="global-dns-configuration" pid=51797 uid=0 result="success"
Jan 29 09:07:53 compute-0 NetworkManager[49019]: <info>  [1769677673.0132] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51797 uid=0 result="success"
Jan 29 09:07:53 compute-0 NetworkManager[49019]: <info>  [1769677673.1332] checkpoint[0x56552c3aaa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 29 09:07:53 compute-0 NetworkManager[49019]: <info>  [1769677673.1336] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51797 uid=0 result="success"
Jan 29 09:07:53 compute-0 ansible-async_wrapper.py[51795]: Module complete (51795)
Jan 29 09:07:53 compute-0 ansible-async_wrapper.py[51794]: Done in kid B.
Jan 29 09:07:55 compute-0 sudo[52258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfwkjxumtiraglyruijovzhsdhqlfrrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677671.9343417-290-255531047840568/AnsiballZ_async_status.py'
Jan 29 09:07:55 compute-0 sudo[52258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:55 compute-0 python3.9[52260]: ansible-ansible.legacy.async_status Invoked with jid=j797714443466.51791 mode=status _async_dir=/root/.ansible_async
Jan 29 09:07:55 compute-0 sudo[52258]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:56 compute-0 sudo[52357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrczzusswbquovtqiyrbiztnegfafszy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677671.9343417-290-255531047840568/AnsiballZ_async_status.py'
Jan 29 09:07:56 compute-0 sudo[52357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:56 compute-0 python3.9[52359]: ansible-ansible.legacy.async_status Invoked with jid=j797714443466.51791 mode=cleanup _async_dir=/root/.ansible_async
Jan 29 09:07:56 compute-0 sudo[52357]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:56 compute-0 sudo[52510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mphdygjpyhswigexdkoypvhoiaytgqec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677676.599676-317-10421204413171/AnsiballZ_stat.py'
Jan 29 09:07:56 compute-0 sudo[52510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:57 compute-0 python3.9[52512]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:07:57 compute-0 sudo[52510]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:57 compute-0 sudo[52633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktxztpkzcspmdvpwwcedletlfimehndb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677676.599676-317-10421204413171/AnsiballZ_copy.py'
Jan 29 09:07:57 compute-0 sudo[52633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:57 compute-0 python3.9[52635]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769677676.599676-317-10421204413171/.source.returncode _original_basename=.o__pd3vs follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:07:57 compute-0 sudo[52633]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:57 compute-0 sudo[52785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izvuuebkabvbidcntmnvtwpmtbtimqyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677677.7659507-333-74928716851984/AnsiballZ_stat.py'
Jan 29 09:07:57 compute-0 sudo[52785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:58 compute-0 python3.9[52787]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:07:58 compute-0 sudo[52785]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:58 compute-0 sudo[52908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlkvncrfzempveyzxeluwwpnqeamdmic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677677.7659507-333-74928716851984/AnsiballZ_copy.py'
Jan 29 09:07:58 compute-0 sudo[52908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:58 compute-0 python3.9[52910]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769677677.7659507-333-74928716851984/.source.cfg _original_basename=.mgszdy2f follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:07:58 compute-0 sudo[52908]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:59 compute-0 sudo[53060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xddazwtxbbpsgsxcrrqsxiccfrsouwsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677678.847757-348-84034191138535/AnsiballZ_systemd.py'
Jan 29 09:07:59 compute-0 sudo[53060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:07:59 compute-0 python3.9[53062]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:07:59 compute-0 systemd[1]: Reloading Network Manager...
Jan 29 09:07:59 compute-0 NetworkManager[49019]: <info>  [1769677679.4312] audit: op="reload" arg="0" pid=53066 uid=0 result="success"
Jan 29 09:07:59 compute-0 NetworkManager[49019]: <info>  [1769677679.4318] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 29 09:07:59 compute-0 systemd[1]: Reloaded Network Manager.
Jan 29 09:07:59 compute-0 sudo[53060]: pam_unix(sudo:session): session closed for user root
Jan 29 09:07:59 compute-0 sshd-session[45024]: Connection closed by 192.168.122.30 port 58514
Jan 29 09:07:59 compute-0 sshd-session[45021]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:07:59 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 09:07:59 compute-0 systemd[1]: session-9.scope: Consumed 43.584s CPU time.
Jan 29 09:07:59 compute-0 systemd-logind[799]: Session 9 logged out. Waiting for processes to exit.
Jan 29 09:07:59 compute-0 systemd-logind[799]: Removed session 9.
Jan 29 09:08:03 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 29 09:08:05 compute-0 sshd-session[53099]: Accepted publickey for zuul from 192.168.122.30 port 57522 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:08:05 compute-0 systemd-logind[799]: New session 10 of user zuul.
Jan 29 09:08:05 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 29 09:08:05 compute-0 sshd-session[53099]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:08:06 compute-0 python3.9[53253]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:08:07 compute-0 python3.9[53407]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:08:08 compute-0 python3.9[53600]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:08:08 compute-0 sshd-session[53102]: Connection closed by 192.168.122.30 port 57522
Jan 29 09:08:08 compute-0 sshd-session[53099]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:08:08 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 09:08:08 compute-0 systemd[1]: session-10.scope: Consumed 1.860s CPU time.
Jan 29 09:08:08 compute-0 systemd-logind[799]: Session 10 logged out. Waiting for processes to exit.
Jan 29 09:08:08 compute-0 systemd-logind[799]: Removed session 10.
Jan 29 09:08:09 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 29 09:08:14 compute-0 sshd-session[53629]: Accepted publickey for zuul from 192.168.122.30 port 59580 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:08:14 compute-0 systemd-logind[799]: New session 11 of user zuul.
Jan 29 09:08:14 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 29 09:08:14 compute-0 sshd-session[53629]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:08:15 compute-0 python3.9[53782]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:08:16 compute-0 python3.9[53937]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:08:17 compute-0 sudo[54091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghkzidgvnroayvfpkbkqxutlevbpuuze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677696.810295-35-45629777071823/AnsiballZ_setup.py'
Jan 29 09:08:17 compute-0 sudo[54091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:17 compute-0 python3.9[54093]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:08:17 compute-0 sudo[54091]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:17 compute-0 sudo[54175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-engpucwsamwyvlmgiemzqmssfyyuetiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677696.810295-35-45629777071823/AnsiballZ_dnf.py'
Jan 29 09:08:17 compute-0 sudo[54175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:18 compute-0 python3.9[54177]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:08:19 compute-0 sudo[54175]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:19 compute-0 sudo[54329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sltejdiarwwjrvgcubwkmxftfwgxtpxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677699.5355458-47-54769111215083/AnsiballZ_setup.py'
Jan 29 09:08:19 compute-0 sudo[54329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:20 compute-0 python3.9[54331]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:08:20 compute-0 sudo[54329]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:21 compute-0 sudo[54524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qselqbaxrinrgxxoktzykzxhsrqlskue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677700.679603-58-53218570345543/AnsiballZ_file.py'
Jan 29 09:08:21 compute-0 sudo[54524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:21 compute-0 python3.9[54526]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:08:21 compute-0 sudo[54524]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:21 compute-0 sudo[54676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxlrjrxcwhhhksribihepwegclsfpkfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677701.431909-66-198564907395621/AnsiballZ_command.py'
Jan 29 09:08:21 compute-0 sudo[54676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:22 compute-0 python3.9[54678]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:08:22 compute-0 podman[54680]: 2026-01-29 09:08:22.058407522 +0000 UTC m=+0.042466163 system refresh
Jan 29 09:08:22 compute-0 sudo[54676]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:22 compute-0 sudo[54840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcmvheuyjxfcwimxjcyoeagwicarpuub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677702.2231581-74-257601344143933/AnsiballZ_stat.py'
Jan 29 09:08:22 compute-0 sudo[54840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:22 compute-0 python3.9[54842]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:08:22 compute-0 sudo[54840]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 09:08:23 compute-0 sudo[54963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtjncybofvelazzqeprhpupcvdpqgduj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677702.2231581-74-257601344143933/AnsiballZ_copy.py'
Jan 29 09:08:23 compute-0 sudo[54963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:23 compute-0 python3.9[54965]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769677702.2231581-74-257601344143933/.source.json follow=False _original_basename=podman_network_config.j2 checksum=71700789973f08c03c47b3f42b07f1afea2d80d9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:08:23 compute-0 sudo[54963]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:23 compute-0 sudo[55115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mflejorhlypuoywawlmtovrkwnqrcvxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677703.7034962-89-180976757362538/AnsiballZ_stat.py'
Jan 29 09:08:23 compute-0 sudo[55115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:24 compute-0 python3.9[55117]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:08:24 compute-0 sudo[55115]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:24 compute-0 sudo[55238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-womhuspuadteipvxuqcbpfpbmdconlrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677703.7034962-89-180976757362538/AnsiballZ_copy.py'
Jan 29 09:08:24 compute-0 sudo[55238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:24 compute-0 python3.9[55240]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769677703.7034962-89-180976757362538/.source.conf follow=False _original_basename=registries.conf.j2 checksum=51dca2f6e7d675b0597f23a4e044edd3f4faff03 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:08:24 compute-0 sudo[55238]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:25 compute-0 sudo[55390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhfselwtwbmxkylgnslokoaiwicajnwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677704.764213-105-59704372262827/AnsiballZ_ini_file.py'
Jan 29 09:08:25 compute-0 sudo[55390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:25 compute-0 python3.9[55392]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:08:25 compute-0 sudo[55390]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:25 compute-0 sudo[55542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqdeczmffnqfpcxtvwepmrajemgrvcbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677705.4849477-105-104303177592682/AnsiballZ_ini_file.py'
Jan 29 09:08:25 compute-0 sudo[55542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:25 compute-0 python3.9[55544]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:08:25 compute-0 sudo[55542]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:26 compute-0 sudo[55694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aejyzmrzcpmydgfjgyctyticjjkdpmbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677706.0019584-105-85711743157176/AnsiballZ_ini_file.py'
Jan 29 09:08:26 compute-0 sudo[55694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:26 compute-0 python3.9[55696]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:08:26 compute-0 sudo[55694]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:26 compute-0 sudo[55846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inghfvuxkfdxfyyydhuxpedcxtqmppjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677706.5444276-105-260773992707880/AnsiballZ_ini_file.py'
Jan 29 09:08:26 compute-0 sudo[55846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:26 compute-0 python3.9[55848]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:08:26 compute-0 sudo[55846]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:27 compute-0 sudo[55998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkakudqpdsuwwiggsbnoklzosxlqkmfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677707.1786568-136-164185497896134/AnsiballZ_dnf.py'
Jan 29 09:08:27 compute-0 sudo[55998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:27 compute-0 python3.9[56000]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:08:28 compute-0 sudo[55998]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:29 compute-0 sudo[56151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjssvobsvcssernfguglpgjtlzywsukn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677709.3082204-147-178510402714790/AnsiballZ_setup.py'
Jan 29 09:08:29 compute-0 sudo[56151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:29 compute-0 python3.9[56153]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:08:29 compute-0 sudo[56151]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:30 compute-0 sudo[56305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slsmlexsfuhtwtdxdauwzdnuapdduvko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677709.9654229-155-112422642668779/AnsiballZ_stat.py'
Jan 29 09:08:30 compute-0 sudo[56305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:30 compute-0 python3.9[56307]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:08:30 compute-0 sudo[56305]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:30 compute-0 sudo[56457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyygwclrwckzbswwqqgwqjdrkyreenkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677710.5717683-164-65385704075381/AnsiballZ_stat.py'
Jan 29 09:08:30 compute-0 sudo[56457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:30 compute-0 python3.9[56459]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:08:30 compute-0 sudo[56457]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:31 compute-0 sudo[56609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxmeibuexujlobysntwihvexozglrktj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677711.2165217-174-203138629952676/AnsiballZ_command.py'
Jan 29 09:08:31 compute-0 sudo[56609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:31 compute-0 python3.9[56611]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:08:31 compute-0 sudo[56609]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:32 compute-0 sudo[56762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzgjoeewstnemvdoogstlwqhvabuwwka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677711.873337-184-272932102237583/AnsiballZ_service_facts.py'
Jan 29 09:08:32 compute-0 sudo[56762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:32 compute-0 python3.9[56764]: ansible-service_facts Invoked
Jan 29 09:08:32 compute-0 network[56781]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 09:08:32 compute-0 network[56782]: 'network-scripts' will be removed from distribution in near future.
Jan 29 09:08:32 compute-0 network[56783]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 09:08:34 compute-0 sudo[56762]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:35 compute-0 sudo[57066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilkszhqbohcluivoenrsghzazionbeuc ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769677715.0000591-199-168422073079193/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769677715.0000591-199-168422073079193/args'
Jan 29 09:08:35 compute-0 sudo[57066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:35 compute-0 sudo[57066]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:35 compute-0 sudo[57233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwwskcvfingsipdefulghumreppybfqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677715.5684004-210-254090603239450/AnsiballZ_dnf.py'
Jan 29 09:08:35 compute-0 sudo[57233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:36 compute-0 python3.9[57235]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:08:37 compute-0 sudo[57233]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:38 compute-0 sudo[57386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgdqmqyiuzfkklkixkhaybatoxduitkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677717.5863008-223-38537866351913/AnsiballZ_package_facts.py'
Jan 29 09:08:38 compute-0 sudo[57386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:38 compute-0 python3.9[57388]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 29 09:08:38 compute-0 sudo[57386]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:39 compute-0 sudo[57538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcertoreqtkzfbmsqmtafclebsqwyvgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677719.1016142-233-159397720383244/AnsiballZ_stat.py'
Jan 29 09:08:39 compute-0 sudo[57538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:39 compute-0 python3.9[57540]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:08:39 compute-0 sudo[57538]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:39 compute-0 sudo[57663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bibtaukjqgbnejsuqrnjtqahopjafgrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677719.1016142-233-159397720383244/AnsiballZ_copy.py'
Jan 29 09:08:39 compute-0 sudo[57663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:40 compute-0 python3.9[57665]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769677719.1016142-233-159397720383244/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:08:40 compute-0 sudo[57663]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:40 compute-0 sudo[57817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iplipmsxfmtqbouxgirplctvtwbyzciy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677720.3469872-248-250042258466717/AnsiballZ_stat.py'
Jan 29 09:08:40 compute-0 sudo[57817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:40 compute-0 python3.9[57819]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:08:40 compute-0 sudo[57817]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:41 compute-0 sudo[57942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjpfjchmwscfjwvydeuddgwysjwiiwmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677720.3469872-248-250042258466717/AnsiballZ_copy.py'
Jan 29 09:08:41 compute-0 sudo[57942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:41 compute-0 python3.9[57944]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769677720.3469872-248-250042258466717/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:08:41 compute-0 sudo[57942]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:42 compute-0 sudo[58096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geuftwsknusuibrzfgssgmrgmlhfzqpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677721.6544733-269-168800306645890/AnsiballZ_lineinfile.py'
Jan 29 09:08:42 compute-0 sudo[58096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:42 compute-0 python3.9[58098]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:08:42 compute-0 sudo[58096]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:42 compute-0 sudo[58250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-citygndaayrfleyqzdwypxqrtdkgrjxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677722.7086685-284-49884102707404/AnsiballZ_setup.py'
Jan 29 09:08:42 compute-0 sudo[58250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:43 compute-0 python3.9[58252]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:08:43 compute-0 sudo[58250]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:44 compute-0 sudo[58334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmxnunfrifrvxuhzutrkxucbhpbdxywj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677722.7086685-284-49884102707404/AnsiballZ_systemd.py'
Jan 29 09:08:44 compute-0 sudo[58334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:44 compute-0 python3.9[58336]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:08:44 compute-0 sudo[58334]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:45 compute-0 sudo[58488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-betixnwqrpbyxhovipdezaanqmycivry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677725.1531858-300-265876293999215/AnsiballZ_setup.py'
Jan 29 09:08:45 compute-0 sudo[58488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:45 compute-0 python3.9[58490]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:08:46 compute-0 sudo[58488]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:46 compute-0 sudo[58572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brgopcdmlcyoxevcdsqpvhtpykoccmup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677725.1531858-300-265876293999215/AnsiballZ_systemd.py'
Jan 29 09:08:46 compute-0 sudo[58572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:46 compute-0 python3.9[58574]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:08:46 compute-0 chronyd[786]: chronyd exiting
Jan 29 09:08:46 compute-0 systemd[1]: Stopping NTP client/server...
Jan 29 09:08:46 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 29 09:08:46 compute-0 systemd[1]: Stopped NTP client/server.
Jan 29 09:08:46 compute-0 systemd[1]: Starting NTP client/server...
Jan 29 09:08:46 compute-0 chronyd[58582]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 29 09:08:46 compute-0 chronyd[58582]: Frequency -26.884 +/- 0.355 ppm read from /var/lib/chrony/drift
Jan 29 09:08:46 compute-0 chronyd[58582]: Loaded seccomp filter (level 2)
Jan 29 09:08:46 compute-0 systemd[1]: Started NTP client/server.
Jan 29 09:08:46 compute-0 sudo[58572]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:47 compute-0 sshd-session[53632]: Connection closed by 192.168.122.30 port 59580
Jan 29 09:08:47 compute-0 sshd-session[53629]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:08:47 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 09:08:47 compute-0 systemd[1]: session-11.scope: Consumed 21.970s CPU time.
Jan 29 09:08:47 compute-0 systemd-logind[799]: Session 11 logged out. Waiting for processes to exit.
Jan 29 09:08:47 compute-0 systemd-logind[799]: Removed session 11.
Jan 29 09:08:53 compute-0 sshd-session[58608]: Accepted publickey for zuul from 192.168.122.30 port 51970 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:08:53 compute-0 systemd-logind[799]: New session 12 of user zuul.
Jan 29 09:08:53 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 29 09:08:53 compute-0 sshd-session[58608]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:08:54 compute-0 sudo[58761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fytmxjhtuvslcreuthxhvedqschaqlzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677733.8257213-17-45191767947207/AnsiballZ_file.py'
Jan 29 09:08:54 compute-0 sudo[58761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:54 compute-0 python3.9[58763]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:08:54 compute-0 sudo[58761]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:55 compute-0 sudo[58913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htjdxyfkscszpgenjyrxodqeqslaaenm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677734.6478992-29-274756129285802/AnsiballZ_stat.py'
Jan 29 09:08:55 compute-0 sudo[58913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:55 compute-0 python3.9[58915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:08:55 compute-0 sudo[58913]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:55 compute-0 sudo[59036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okeyiuodswgbxknpuuzwpulmxjpmtjwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677734.6478992-29-274756129285802/AnsiballZ_copy.py'
Jan 29 09:08:55 compute-0 sudo[59036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:08:55 compute-0 python3.9[59038]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769677734.6478992-29-274756129285802/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:08:55 compute-0 sudo[59036]: pam_unix(sudo:session): session closed for user root
Jan 29 09:08:56 compute-0 sshd-session[58611]: Connection closed by 192.168.122.30 port 51970
Jan 29 09:08:56 compute-0 sshd-session[58608]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:08:56 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 09:08:56 compute-0 systemd[1]: session-12.scope: Consumed 1.288s CPU time.
Jan 29 09:08:56 compute-0 systemd-logind[799]: Session 12 logged out. Waiting for processes to exit.
Jan 29 09:08:56 compute-0 systemd-logind[799]: Removed session 12.
Jan 29 09:09:01 compute-0 sshd-session[59063]: Accepted publickey for zuul from 192.168.122.30 port 39332 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:09:01 compute-0 systemd-logind[799]: New session 13 of user zuul.
Jan 29 09:09:01 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 29 09:09:01 compute-0 sshd-session[59063]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:09:02 compute-0 python3.9[59216]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:09:02 compute-0 sudo[59370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waxbflmjfysxjsqjmnivihednfjqpsep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677742.5628908-28-46308354124158/AnsiballZ_file.py'
Jan 29 09:09:02 compute-0 sudo[59370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:03 compute-0 python3.9[59372]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:03 compute-0 sudo[59370]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:03 compute-0 sudo[59545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqnwlozlfzgcjugbiwvbsijyohvpujif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677743.2698283-36-190923829799741/AnsiballZ_stat.py'
Jan 29 09:09:03 compute-0 sudo[59545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:03 compute-0 python3.9[59547]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:03 compute-0 sudo[59545]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:04 compute-0 sudo[59668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbuqmvtvdpveuughkwbhusxbucvxrucu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677743.2698283-36-190923829799741/AnsiballZ_copy.py'
Jan 29 09:09:04 compute-0 sudo[59668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:04 compute-0 python3.9[59670]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769677743.2698283-36-190923829799741/.source.json _original_basename=.m5p27mok follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:04 compute-0 sudo[59668]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:05 compute-0 sudo[59820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjqeycgknlklajiakgxeckuhsydtnvoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677744.9445455-59-123446273526753/AnsiballZ_stat.py'
Jan 29 09:09:05 compute-0 sudo[59820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:05 compute-0 python3.9[59822]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:05 compute-0 sudo[59820]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:05 compute-0 sudo[59943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrcocnovwujuuxhhfawcjfnyitswucjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677744.9445455-59-123446273526753/AnsiballZ_copy.py'
Jan 29 09:09:05 compute-0 sudo[59943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:05 compute-0 python3.9[59945]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769677744.9445455-59-123446273526753/.source _original_basename=.7y86fypz follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:05 compute-0 sudo[59943]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:06 compute-0 sudo[60095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upagcittprzotukmnavywwqlxuishheo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677746.0221267-75-120382416695189/AnsiballZ_file.py'
Jan 29 09:09:06 compute-0 sudo[60095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:06 compute-0 python3.9[60097]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:09:06 compute-0 sudo[60095]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:06 compute-0 sudo[60247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmstocllgkaaitksrfcmqwrgazyvaeku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677746.6174674-83-20362117501556/AnsiballZ_stat.py'
Jan 29 09:09:06 compute-0 sudo[60247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:07 compute-0 python3.9[60249]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:07 compute-0 sudo[60247]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:07 compute-0 sudo[60370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffmsmvlipbkrtlixsaengtrwkwgxvfqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677746.6174674-83-20362117501556/AnsiballZ_copy.py'
Jan 29 09:09:07 compute-0 sudo[60370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:07 compute-0 python3.9[60372]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769677746.6174674-83-20362117501556/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:09:07 compute-0 sudo[60370]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:07 compute-0 sudo[60522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lafvwmkjrjebddqprhecrjycjalywqby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677747.67789-83-168188557343255/AnsiballZ_stat.py'
Jan 29 09:09:07 compute-0 sudo[60522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:08 compute-0 python3.9[60524]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:08 compute-0 sudo[60522]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:08 compute-0 sudo[60645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynvexlajixrmrrocockejvljquwwvcdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677747.67789-83-168188557343255/AnsiballZ_copy.py'
Jan 29 09:09:08 compute-0 sudo[60645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:08 compute-0 python3.9[60647]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769677747.67789-83-168188557343255/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:09:08 compute-0 sudo[60645]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:08 compute-0 sudo[60797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbpiphagtmhtznpfptfsqdaflphybugo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677748.7178414-112-72911654442567/AnsiballZ_file.py'
Jan 29 09:09:08 compute-0 sudo[60797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:09 compute-0 python3.9[60799]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:09 compute-0 sudo[60797]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:09 compute-0 sudo[60949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zztgstipkugkclfwsxcocojdijqocufz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677749.2638896-120-251743205403587/AnsiballZ_stat.py'
Jan 29 09:09:09 compute-0 sudo[60949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:09 compute-0 python3.9[60951]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:09 compute-0 sudo[60949]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:09 compute-0 sudo[61072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgbavvdibchubssbsauddrvvxiculzzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677749.2638896-120-251743205403587/AnsiballZ_copy.py'
Jan 29 09:09:09 compute-0 sudo[61072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:10 compute-0 python3.9[61074]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769677749.2638896-120-251743205403587/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:10 compute-0 sudo[61072]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:10 compute-0 sudo[61224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpfbigfdkiqkhzcsvuqqtcnuldiupmxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677750.2654898-135-87127769575461/AnsiballZ_stat.py'
Jan 29 09:09:10 compute-0 sudo[61224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:10 compute-0 python3.9[61226]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:10 compute-0 sudo[61224]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:10 compute-0 sudo[61347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oesbhudlyromwiyzqlzdjhkaqeheozfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677750.2654898-135-87127769575461/AnsiballZ_copy.py'
Jan 29 09:09:10 compute-0 sudo[61347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:11 compute-0 python3.9[61349]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769677750.2654898-135-87127769575461/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:11 compute-0 sudo[61347]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:11 compute-0 sudo[61499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weymtgqcelhxzrztwofgymuaemjlauga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677751.341117-150-102400413676646/AnsiballZ_systemd.py'
Jan 29 09:09:11 compute-0 sudo[61499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:12 compute-0 python3.9[61501]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:09:12 compute-0 systemd[1]: Reloading.
Jan 29 09:09:12 compute-0 systemd-rc-local-generator[61522]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:09:12 compute-0 systemd-sysv-generator[61525]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:09:12 compute-0 systemd[1]: Reloading.
Jan 29 09:09:12 compute-0 systemd-rc-local-generator[61556]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:09:12 compute-0 systemd-sysv-generator[61565]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:09:12 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 29 09:09:12 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 29 09:09:12 compute-0 sudo[61499]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:13 compute-0 sudo[61725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjqeylrhnjmrtrzxdcsfvsagjbmjwvzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677752.861943-158-76691629105413/AnsiballZ_stat.py'
Jan 29 09:09:13 compute-0 sudo[61725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:13 compute-0 python3.9[61727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:13 compute-0 sudo[61725]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:13 compute-0 sudo[61848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcfmwndngmieuyozqnedacrgddwhwhso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677752.861943-158-76691629105413/AnsiballZ_copy.py'
Jan 29 09:09:13 compute-0 sudo[61848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:13 compute-0 python3.9[61850]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769677752.861943-158-76691629105413/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:13 compute-0 sudo[61848]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:14 compute-0 sudo[62000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbrdisdsvbqgkfycjgzezzoripuvawfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677753.9491582-173-127701364837518/AnsiballZ_stat.py'
Jan 29 09:09:14 compute-0 sudo[62000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:14 compute-0 python3.9[62002]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:14 compute-0 sudo[62000]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:14 compute-0 sudo[62123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwjalquxomcrbtukwjovwmvdpnkgzupd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677753.9491582-173-127701364837518/AnsiballZ_copy.py'
Jan 29 09:09:14 compute-0 sudo[62123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:14 compute-0 python3.9[62125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769677753.9491582-173-127701364837518/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:14 compute-0 sudo[62123]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:15 compute-0 sudo[62275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptmzwnkooywuiaqocafrhatjzxtnijph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677754.952602-188-79792374271413/AnsiballZ_systemd.py'
Jan 29 09:09:15 compute-0 sudo[62275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:15 compute-0 python3.9[62277]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:09:15 compute-0 systemd[1]: Reloading.
Jan 29 09:09:15 compute-0 systemd-rc-local-generator[62303]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:09:15 compute-0 systemd-sysv-generator[62307]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:09:15 compute-0 systemd[1]: Reloading.
Jan 29 09:09:15 compute-0 systemd-rc-local-generator[62336]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:09:15 compute-0 systemd-sysv-generator[62343]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:09:15 compute-0 systemd[1]: Starting Create netns directory...
Jan 29 09:09:15 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 29 09:09:15 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 29 09:09:15 compute-0 systemd[1]: Finished Create netns directory.
Jan 29 09:09:15 compute-0 sudo[62275]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:16 compute-0 python3.9[62502]: ansible-ansible.builtin.service_facts Invoked
Jan 29 09:09:16 compute-0 network[62519]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 09:09:16 compute-0 network[62520]: 'network-scripts' will be removed from distribution in near future.
Jan 29 09:09:16 compute-0 network[62521]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 09:09:20 compute-0 sudo[62781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjzzpuoyutdawsrvblvdvmevqyuduppe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677759.946253-204-81445229610622/AnsiballZ_systemd.py'
Jan 29 09:09:20 compute-0 sudo[62781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:20 compute-0 python3.9[62783]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:09:20 compute-0 systemd[1]: Reloading.
Jan 29 09:09:20 compute-0 systemd-rc-local-generator[62804]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:09:20 compute-0 systemd-sysv-generator[62813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:09:20 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 29 09:09:20 compute-0 iptables.init[62823]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 29 09:09:21 compute-0 iptables.init[62823]: iptables: Flushing firewall rules: [  OK  ]
Jan 29 09:09:21 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 29 09:09:21 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 29 09:09:21 compute-0 sudo[62781]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:21 compute-0 sudo[63017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idfbscksqgoetzkyzixfwxghnwggxzag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677761.2321444-204-149659761250283/AnsiballZ_systemd.py'
Jan 29 09:09:21 compute-0 sudo[63017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:21 compute-0 python3.9[63019]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:09:21 compute-0 sudo[63017]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:22 compute-0 sudo[63171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjrvvtsbggmoclvkdekilabeungvcyzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677762.011351-220-268929806711990/AnsiballZ_systemd.py'
Jan 29 09:09:22 compute-0 sudo[63171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:22 compute-0 python3.9[63173]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:09:22 compute-0 systemd[1]: Reloading.
Jan 29 09:09:22 compute-0 systemd-rc-local-generator[63202]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:09:22 compute-0 systemd-sysv-generator[63206]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:09:22 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 29 09:09:22 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 29 09:09:22 compute-0 sudo[63171]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:23 compute-0 sudo[63363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljvmghlnlujgqrwqlhscfkbbdsddycmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677763.0691159-228-262279265237841/AnsiballZ_command.py'
Jan 29 09:09:23 compute-0 sudo[63363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:23 compute-0 python3.9[63365]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:09:23 compute-0 sudo[63363]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:24 compute-0 sudo[63516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvsirdmrqopryhmrwmrxmoulocadlujg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677764.0088367-242-18533874766096/AnsiballZ_stat.py'
Jan 29 09:09:24 compute-0 sudo[63516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:24 compute-0 python3.9[63518]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:24 compute-0 sudo[63516]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:24 compute-0 sudo[63641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvpscqueohqxostwodzrphcmporkwssu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677764.0088367-242-18533874766096/AnsiballZ_copy.py'
Jan 29 09:09:24 compute-0 sudo[63641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:24 compute-0 python3.9[63643]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769677764.0088367-242-18533874766096/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:24 compute-0 sudo[63641]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:25 compute-0 sudo[63794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivuybihtrkmbuessfoyomblsphrmuzpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677765.123887-257-16465452838129/AnsiballZ_systemd.py'
Jan 29 09:09:25 compute-0 sudo[63794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:25 compute-0 python3.9[63796]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:09:25 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 29 09:09:25 compute-0 sshd[999]: Received SIGHUP; restarting.
Jan 29 09:09:25 compute-0 sshd[999]: Server listening on 0.0.0.0 port 22.
Jan 29 09:09:25 compute-0 sshd[999]: Server listening on :: port 22.
Jan 29 09:09:25 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 29 09:09:25 compute-0 sudo[63794]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:26 compute-0 sudo[63950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfbzqpgpuzjsjhpkgipkhftybzqogwql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677766.0223484-265-20829871270912/AnsiballZ_file.py'
Jan 29 09:09:26 compute-0 sudo[63950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:26 compute-0 python3.9[63952]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:26 compute-0 sudo[63950]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:26 compute-0 sudo[64102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obpnlvpzfjlribcholhhsrhutyfcaygc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677766.6278586-273-74537666738402/AnsiballZ_stat.py'
Jan 29 09:09:26 compute-0 sudo[64102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:27 compute-0 python3.9[64104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:27 compute-0 sudo[64102]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:27 compute-0 sudo[64225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhgtcxneaayhchlkuezzhjmmjzgtolvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677766.6278586-273-74537666738402/AnsiballZ_copy.py'
Jan 29 09:09:27 compute-0 sudo[64225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:27 compute-0 python3.9[64227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769677766.6278586-273-74537666738402/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:27 compute-0 sudo[64225]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:28 compute-0 sudo[64377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpqraphzjwqziiecdlisgpyofgbntsmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677767.8706505-291-58481365800454/AnsiballZ_timezone.py'
Jan 29 09:09:28 compute-0 sudo[64377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:28 compute-0 python3.9[64379]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 29 09:09:28 compute-0 systemd[1]: Starting Time & Date Service...
Jan 29 09:09:28 compute-0 systemd[1]: Started Time & Date Service.
Jan 29 09:09:28 compute-0 sudo[64377]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:29 compute-0 sudo[64533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flkadpgxrnubybfqhijzynfyfvjmjsjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677768.8332157-300-72838990949469/AnsiballZ_file.py'
Jan 29 09:09:29 compute-0 sudo[64533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:29 compute-0 python3.9[64535]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:29 compute-0 sudo[64533]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:29 compute-0 sudo[64685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwzlevevbjbvfatptsvredqdqbnclvir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677769.4094722-308-207049213110691/AnsiballZ_stat.py'
Jan 29 09:09:29 compute-0 sudo[64685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:29 compute-0 python3.9[64687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:29 compute-0 sudo[64685]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:30 compute-0 sudo[64808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dktlxtnpdsiqwxyuqpvsqwoulkbahvxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677769.4094722-308-207049213110691/AnsiballZ_copy.py'
Jan 29 09:09:30 compute-0 sudo[64808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:30 compute-0 python3.9[64810]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769677769.4094722-308-207049213110691/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:30 compute-0 sudo[64808]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:30 compute-0 sudo[64960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qelaabovmdwibvekbvzdtwjsiudzpwcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677770.555683-323-29986073903243/AnsiballZ_stat.py'
Jan 29 09:09:30 compute-0 sudo[64960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:31 compute-0 python3.9[64962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:31 compute-0 sudo[64960]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:31 compute-0 sudo[65083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhmjpeyqkrfjyfoxrcrleavvhfoutyhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677770.555683-323-29986073903243/AnsiballZ_copy.py'
Jan 29 09:09:31 compute-0 sudo[65083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:31 compute-0 python3.9[65085]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769677770.555683-323-29986073903243/.source.yaml _original_basename=.0culp3ba follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:31 compute-0 sudo[65083]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:31 compute-0 sudo[65235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yetxhtttrlvxnbqqkxevmvcbpgwufmgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677771.677151-338-154772820811375/AnsiballZ_stat.py'
Jan 29 09:09:31 compute-0 sudo[65235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:32 compute-0 python3.9[65237]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:32 compute-0 sudo[65235]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:32 compute-0 sudo[65358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwwcpkktczsvoudgoxqjctapsfagzqbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677771.677151-338-154772820811375/AnsiballZ_copy.py'
Jan 29 09:09:32 compute-0 sudo[65358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:32 compute-0 python3.9[65360]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769677771.677151-338-154772820811375/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:32 compute-0 sudo[65358]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:33 compute-0 sudo[65510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njfgqqmcjiioqbetbbkyjcxywzeeayrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677772.7993128-353-69544353894537/AnsiballZ_command.py'
Jan 29 09:09:33 compute-0 sudo[65510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:33 compute-0 python3.9[65512]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:09:33 compute-0 sudo[65510]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:33 compute-0 sudo[65663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kapodpnnsoattvjkzslcfcsucphqqxoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677773.3441918-361-99848712625504/AnsiballZ_command.py'
Jan 29 09:09:33 compute-0 sudo[65663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:33 compute-0 python3.9[65665]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:09:33 compute-0 sudo[65663]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:34 compute-0 sudo[65816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqewzwlsojcobzpexbegzcfuksynawfk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769677773.8576682-369-145120423671273/AnsiballZ_edpm_nftables_from_files.py'
Jan 29 09:09:34 compute-0 sudo[65816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:34 compute-0 python3[65818]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 29 09:09:34 compute-0 sudo[65816]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:34 compute-0 sudo[65968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofedweoogcdfphwstfuasrqteafpdcxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677774.597509-377-102391016276519/AnsiballZ_stat.py'
Jan 29 09:09:34 compute-0 sudo[65968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:35 compute-0 python3.9[65970]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:35 compute-0 sudo[65968]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:35 compute-0 sudo[66091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxjsdotsivgjoqbxbzqhzoxyskfstdfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677774.597509-377-102391016276519/AnsiballZ_copy.py'
Jan 29 09:09:35 compute-0 sudo[66091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:35 compute-0 python3.9[66093]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769677774.597509-377-102391016276519/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:35 compute-0 sudo[66091]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:35 compute-0 sudo[66243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unqeiifbazuoxuysxepvgsdgtdyfengi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677775.6821363-392-131771392676767/AnsiballZ_stat.py'
Jan 29 09:09:35 compute-0 sudo[66243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:36 compute-0 python3.9[66245]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:36 compute-0 sudo[66243]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:36 compute-0 sudo[66366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpwoyscteeehllwjzoxwngwvuqsxuwbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677775.6821363-392-131771392676767/AnsiballZ_copy.py'
Jan 29 09:09:36 compute-0 sudo[66366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:36 compute-0 python3.9[66368]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769677775.6821363-392-131771392676767/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:36 compute-0 sudo[66366]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:36 compute-0 sudo[66518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjkvkydaachvplgrrvjogmiznslplawc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677776.7017117-407-196031685890561/AnsiballZ_stat.py'
Jan 29 09:09:36 compute-0 sudo[66518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:37 compute-0 python3.9[66520]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:37 compute-0 sudo[66518]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:37 compute-0 sudo[66641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvciqtwvxjdjgfxxhjzhmwiedrlonxoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677776.7017117-407-196031685890561/AnsiballZ_copy.py'
Jan 29 09:09:37 compute-0 sudo[66641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:37 compute-0 python3.9[66643]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769677776.7017117-407-196031685890561/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:37 compute-0 sudo[66641]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:38 compute-0 sudo[66793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grkyjefwtgdigkknzzgvwhflvvkdjrtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677777.8132467-422-24842221569688/AnsiballZ_stat.py'
Jan 29 09:09:38 compute-0 sudo[66793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:38 compute-0 python3.9[66795]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:38 compute-0 sudo[66793]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:38 compute-0 sudo[66916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozhqstsgzycsfvsyipgafxztyhhbhqhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677777.8132467-422-24842221569688/AnsiballZ_copy.py'
Jan 29 09:09:38 compute-0 sudo[66916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:38 compute-0 python3.9[66918]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769677777.8132467-422-24842221569688/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:38 compute-0 sudo[66916]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:39 compute-0 sudo[67068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhfhbzfdtjcvyhxsdrcuevxbjswirdxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677778.9134476-437-235834607561687/AnsiballZ_stat.py'
Jan 29 09:09:39 compute-0 sudo[67068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:39 compute-0 python3.9[67070]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:09:39 compute-0 sudo[67068]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:39 compute-0 sudo[67191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqqagfvciouobemseydttbokvalpseiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677778.9134476-437-235834607561687/AnsiballZ_copy.py'
Jan 29 09:09:39 compute-0 sudo[67191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:39 compute-0 python3.9[67193]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769677778.9134476-437-235834607561687/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:39 compute-0 sudo[67191]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:40 compute-0 sudo[67343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mybkzmdncwlnlsjqjssymlonxfzxzfxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677780.1121676-452-121770773544873/AnsiballZ_file.py'
Jan 29 09:09:40 compute-0 sudo[67343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:40 compute-0 python3.9[67345]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:40 compute-0 sudo[67343]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:40 compute-0 sudo[67495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvkggzmztmamecgrpwfkmqpdwdqznjue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677780.7101877-460-240469559722912/AnsiballZ_command.py'
Jan 29 09:09:40 compute-0 sudo[67495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:41 compute-0 python3.9[67497]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:09:41 compute-0 sudo[67495]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:41 compute-0 sudo[67654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eocjmvjxtacuumjempnavresgmrfgkqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677781.5441825-468-143503275136112/AnsiballZ_blockinfile.py'
Jan 29 09:09:41 compute-0 sudo[67654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:42 compute-0 python3.9[67656]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:42 compute-0 sudo[67654]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:42 compute-0 sudo[67807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sveogltapovibwjvchhswmydtygecfde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677782.423291-477-128089790699989/AnsiballZ_file.py'
Jan 29 09:09:42 compute-0 sudo[67807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:42 compute-0 python3.9[67809]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:43 compute-0 sudo[67807]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:43 compute-0 sudo[67959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuwjhnbtylxbcbkqughsyuqjcfosjkjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677783.137246-477-33768666007883/AnsiballZ_file.py'
Jan 29 09:09:43 compute-0 sudo[67959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:43 compute-0 python3.9[67961]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:43 compute-0 sudo[67959]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:44 compute-0 sudo[68111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvcgdpaapzecdyltystopezzsmpuykkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677783.753473-492-29052191266524/AnsiballZ_mount.py'
Jan 29 09:09:44 compute-0 sudo[68111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:44 compute-0 python3.9[68113]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 29 09:09:44 compute-0 sudo[68111]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:45 compute-0 sudo[68264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgavmusopwoqkjsjlpsphgonslzyomom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677784.8961945-492-54857743279225/AnsiballZ_mount.py'
Jan 29 09:09:45 compute-0 sudo[68264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:45 compute-0 python3.9[68266]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 29 09:09:45 compute-0 sudo[68264]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:45 compute-0 sshd-session[59066]: Connection closed by 192.168.122.30 port 39332
Jan 29 09:09:45 compute-0 sshd-session[59063]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:09:45 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 09:09:45 compute-0 systemd[1]: session-13.scope: Consumed 30.201s CPU time.
Jan 29 09:09:45 compute-0 systemd-logind[799]: Session 13 logged out. Waiting for processes to exit.
Jan 29 09:09:45 compute-0 systemd-logind[799]: Removed session 13.
Jan 29 09:09:51 compute-0 sshd-session[68292]: Accepted publickey for zuul from 192.168.122.30 port 43956 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:09:51 compute-0 systemd-logind[799]: New session 14 of user zuul.
Jan 29 09:09:51 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 29 09:09:51 compute-0 sshd-session[68292]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:09:51 compute-0 sudo[68445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmyihdmwxgqiecxrdbtlaoobkjcwpdoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677791.1720655-16-182747348471280/AnsiballZ_tempfile.py'
Jan 29 09:09:51 compute-0 sudo[68445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:51 compute-0 python3.9[68447]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 29 09:09:51 compute-0 sudo[68445]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:52 compute-0 sudo[68597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpbtjpkfmxdqcvhknrmlawdezjsrlhav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677792.0085788-28-239551709999735/AnsiballZ_stat.py'
Jan 29 09:09:52 compute-0 sudo[68597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:52 compute-0 python3.9[68599]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:09:52 compute-0 sudo[68597]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:53 compute-0 sudo[68749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyccbhbpsdfbkqodqtygkjcujasvmfwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677792.8409662-38-189113160651531/AnsiballZ_setup.py'
Jan 29 09:09:53 compute-0 sudo[68749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:53 compute-0 python3.9[68751]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:09:53 compute-0 sudo[68749]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:54 compute-0 sudo[68901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibzswdkbowoovfefoqinhvirccbwfqcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677793.9274325-47-236933960929916/AnsiballZ_blockinfile.py'
Jan 29 09:09:54 compute-0 sudo[68901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:54 compute-0 python3.9[68903]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjuxZIJ4X8jT6trr5CartPX+xSyv6a7KuzJBzvtqnlScCyccSTh2hIF/m/mxwqVM6xI10XvoL6FyLNojFtf+FnVMhM9rRwoM/m2Gk/dDGgxWGxqndd7e54BNHzwcErzCLYORDFNcMVFLfvlJjglTHabcYqcQ7D34yPyBImv2JZkIXcPKxlA0dSe92bqOt8Srjqd7eTDHrvD8Ucs09i0t3TrSIg2fzwxWs38gnD8rHvgibq1nm1pYFZFAVVpUWDbxqB1GogN1jls44gwyptQbvRRzW/8qslugFinSADjrdhgV9BN9TCkO/Fiae7Kw1ME3xFCYrgHDyEdHjo4SFt32SAMDeg5XBOP+2FoXB3YV3RUa8ctzxaAobE1LPb1hPsluGuQ180BCJYiou6hXDOw6VwSjI59Xd9PPV6voHtOV3hijs0tMHTigimaqnacysTk9yWeU4ZVosAQT2FMWZv6shG6zbGZewCLD7jGfDrdzdyxBquJ7GN/N5t+KjtqFI3Rr8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINQy3RXTmDzw+Eiotj8TUZIiot4Z9D7DKW79i5sp1sRr
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCahJ3hcjzChG5NXhUimUXwcSbVxmfQH1zORvmedrE8Hzpp1mYh+ZP4/SqeWvSb00XQFfZNxpUdcWKLt9leH/n8=
                                             create=True mode=0644 path=/tmp/ansible.fn5v1ouw state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:54 compute-0 sudo[68901]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:55 compute-0 sudo[69053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhivfhqnpggmeajmpuscxaomgufquqid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677794.7015433-55-163842214305817/AnsiballZ_command.py'
Jan 29 09:09:55 compute-0 sudo[69053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:55 compute-0 python3.9[69055]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.fn5v1ouw' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:09:55 compute-0 sudo[69053]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:55 compute-0 sudo[69207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eamafjziipevxhfponvriccttrdvphlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677795.4884093-63-189188142327954/AnsiballZ_file.py'
Jan 29 09:09:55 compute-0 sudo[69207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:09:56 compute-0 python3.9[69209]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.fn5v1ouw state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:09:56 compute-0 sudo[69207]: pam_unix(sudo:session): session closed for user root
Jan 29 09:09:56 compute-0 sshd-session[68295]: Connection closed by 192.168.122.30 port 43956
Jan 29 09:09:56 compute-0 sshd-session[68292]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:09:56 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 09:09:56 compute-0 systemd[1]: session-14.scope: Consumed 3.181s CPU time.
Jan 29 09:09:56 compute-0 systemd-logind[799]: Session 14 logged out. Waiting for processes to exit.
Jan 29 09:09:56 compute-0 systemd-logind[799]: Removed session 14.
Jan 29 09:09:58 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 29 09:10:01 compute-0 sshd-session[69236]: Accepted publickey for zuul from 192.168.122.30 port 59144 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:10:01 compute-0 systemd-logind[799]: New session 15 of user zuul.
Jan 29 09:10:01 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 29 09:10:01 compute-0 sshd-session[69236]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:10:02 compute-0 python3.9[69389]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:10:03 compute-0 sudo[69543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orztvokjyuazuiweqnmykuzkpknocymy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677803.268443-27-202386455239299/AnsiballZ_systemd.py'
Jan 29 09:10:03 compute-0 sudo[69543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:04 compute-0 python3.9[69545]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 29 09:10:04 compute-0 sudo[69543]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:04 compute-0 sudo[69697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkvzyaetmkbraypfazcwigekdphcrssy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677804.4088724-35-253996778248940/AnsiballZ_systemd.py'
Jan 29 09:10:04 compute-0 sudo[69697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:05 compute-0 python3.9[69699]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:10:05 compute-0 sudo[69697]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:05 compute-0 sudo[69850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leewwuduivugnckxernerhchwxstcwdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677805.3649547-44-138516616621518/AnsiballZ_command.py'
Jan 29 09:10:05 compute-0 sudo[69850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:05 compute-0 python3.9[69852]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:10:05 compute-0 sudo[69850]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:06 compute-0 sudo[70003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mphwmdzptsjiyxgtqejlovczxuxvqkjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677806.120065-52-177025442480092/AnsiballZ_stat.py'
Jan 29 09:10:06 compute-0 sudo[70003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:06 compute-0 python3.9[70005]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:10:06 compute-0 sudo[70003]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:07 compute-0 sudo[70157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbubsajqskpyiwznzifoadcubdbkuphj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677806.8875563-60-192918396733779/AnsiballZ_command.py'
Jan 29 09:10:07 compute-0 sudo[70157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:07 compute-0 python3.9[70159]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:10:07 compute-0 sudo[70157]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:08 compute-0 sudo[70312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zssyqebmaqrfngvmbebihvvjjhqzuufs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677807.5510423-68-262950700482057/AnsiballZ_file.py'
Jan 29 09:10:08 compute-0 sudo[70312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:08 compute-0 python3.9[70314]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:10:08 compute-0 sudo[70312]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:08 compute-0 sshd-session[69239]: Connection closed by 192.168.122.30 port 59144
Jan 29 09:10:08 compute-0 sshd-session[69236]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:10:08 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 09:10:08 compute-0 systemd[1]: session-15.scope: Consumed 4.229s CPU time.
Jan 29 09:10:08 compute-0 systemd-logind[799]: Session 15 logged out. Waiting for processes to exit.
Jan 29 09:10:08 compute-0 systemd-logind[799]: Removed session 15.
Jan 29 09:10:13 compute-0 sshd-session[70339]: Accepted publickey for zuul from 192.168.122.30 port 50034 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:10:13 compute-0 systemd-logind[799]: New session 16 of user zuul.
Jan 29 09:10:13 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 29 09:10:13 compute-0 sshd-session[70339]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:10:15 compute-0 python3.9[70492]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:10:15 compute-0 sudo[70646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jivapjzsrpmyivcrwvjrzecrdadykoei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677815.4295802-29-62411565239293/AnsiballZ_setup.py'
Jan 29 09:10:15 compute-0 sudo[70646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:15 compute-0 python3.9[70648]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:10:16 compute-0 sudo[70646]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:16 compute-0 sudo[70730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfoxatsqlamtuofbppcavntcpnkhmvfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769677815.4295802-29-62411565239293/AnsiballZ_dnf.py'
Jan 29 09:10:16 compute-0 sudo[70730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:16 compute-0 python3.9[70732]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 29 09:10:18 compute-0 sudo[70730]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:18 compute-0 python3.9[70883]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:10:20 compute-0 python3.9[71034]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 29 09:10:20 compute-0 python3.9[71184]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:10:20 compute-0 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 09:10:21 compute-0 python3.9[71335]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:10:22 compute-0 sshd-session[70342]: Connection closed by 192.168.122.30 port 50034
Jan 29 09:10:22 compute-0 sshd-session[70339]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:10:22 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 09:10:22 compute-0 systemd[1]: session-16.scope: Consumed 5.655s CPU time.
Jan 29 09:10:22 compute-0 systemd-logind[799]: Session 16 logged out. Waiting for processes to exit.
Jan 29 09:10:22 compute-0 systemd-logind[799]: Removed session 16.
Jan 29 09:10:29 compute-0 sshd-session[71360]: Accepted publickey for zuul from 38.129.56.236 port 33270 ssh2: RSA SHA256:UVFwpB4pGBKhI2DrodtDDM9jvfvTiEMRDyxyOHUhUhI
Jan 29 09:10:29 compute-0 systemd-logind[799]: New session 17 of user zuul.
Jan 29 09:10:29 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 29 09:10:29 compute-0 sshd-session[71360]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:10:29 compute-0 sudo[71436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnturavtzegsnewxjksqbjraxehlpscc ; /usr/bin/python3'
Jan 29 09:10:29 compute-0 sudo[71436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:30 compute-0 useradd[71440]: new group: name=ceph-admin, GID=42478
Jan 29 09:10:30 compute-0 useradd[71440]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 29 09:10:30 compute-0 sudo[71436]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:30 compute-0 sudo[71522]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bovhgwabmbhlhxwpjimixijsrrwkyubx ; /usr/bin/python3'
Jan 29 09:10:30 compute-0 sudo[71522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:31 compute-0 sudo[71522]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:31 compute-0 sudo[71595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqsolxomymbauxiaiuckyasbyoqpfioh ; /usr/bin/python3'
Jan 29 09:10:31 compute-0 sudo[71595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:31 compute-0 sudo[71595]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:31 compute-0 sudo[71645]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skfmanoowsgswrwmgqbweiqgoicphfvb ; /usr/bin/python3'
Jan 29 09:10:31 compute-0 sudo[71645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:32 compute-0 sudo[71645]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:32 compute-0 sudo[71671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vikmrfffquigkqtldetrqqwpsgeiwsew ; /usr/bin/python3'
Jan 29 09:10:32 compute-0 sudo[71671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:32 compute-0 sudo[71671]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:32 compute-0 sudo[71697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjcxiwomebbdcrfxixhehgyzvpzqgsvv ; /usr/bin/python3'
Jan 29 09:10:32 compute-0 sudo[71697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:32 compute-0 sudo[71697]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:33 compute-0 sudo[71723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awsqwaiipyytaodalxjcfgvysxijvkgw ; /usr/bin/python3'
Jan 29 09:10:33 compute-0 sudo[71723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:33 compute-0 sudo[71723]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:33 compute-0 sudo[71801]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdznaqyczsfwcgyhodwanpfkmsatzkbf ; /usr/bin/python3'
Jan 29 09:10:33 compute-0 sudo[71801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:33 compute-0 sudo[71801]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:33 compute-0 sudo[71874]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvulnllgpkesqoyxtsutczlsjiunomee ; /usr/bin/python3'
Jan 29 09:10:33 compute-0 sudo[71874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:33 compute-0 sudo[71874]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:34 compute-0 sudo[71976]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpqmrmuftfkjxaifdqhxyttnidsyscaf ; /usr/bin/python3'
Jan 29 09:10:34 compute-0 sudo[71976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:34 compute-0 sudo[71976]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:34 compute-0 sudo[72049]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbwdwcyyoregylduiftwwvkdfvfzawyw ; /usr/bin/python3'
Jan 29 09:10:34 compute-0 sudo[72049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:34 compute-0 sudo[72049]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:35 compute-0 sudo[72099]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvqobuurqqivdawaygtbdroueydribev ; /usr/bin/python3'
Jan 29 09:10:35 compute-0 sudo[72099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:35 compute-0 python3[72101]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:10:36 compute-0 sudo[72099]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:36 compute-0 sudo[72194]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckmcbkzcgnvhcgdpvorvxbzrdtkwepaj ; /usr/bin/python3'
Jan 29 09:10:36 compute-0 sudo[72194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:36 compute-0 python3[72196]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 29 09:10:38 compute-0 sudo[72194]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:38 compute-0 sudo[72221]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixbeyordzuiokneatyijmdasxmpsoufj ; /usr/bin/python3'
Jan 29 09:10:38 compute-0 sudo[72221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:38 compute-0 python3[72223]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 09:10:38 compute-0 sudo[72221]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:38 compute-0 sudo[72247]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njntbkjfkdyanqnwamupijdzqakxjwne ; /usr/bin/python3'
Jan 29 09:10:38 compute-0 sudo[72247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:38 compute-0 python3[72249]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:10:38 compute-0 kernel: loop: module loaded
Jan 29 09:10:38 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Jan 29 09:10:38 compute-0 sudo[72247]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:39 compute-0 sudo[72282]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgzrbmgwjtlbppbtsxbggqewwsmlbhxh ; /usr/bin/python3'
Jan 29 09:10:39 compute-0 sudo[72282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:39 compute-0 python3[72284]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:10:39 compute-0 lvm[72287]: PV /dev/loop3 not used.
Jan 29 09:10:39 compute-0 lvm[72296]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:10:39 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 29 09:10:39 compute-0 lvm[72298]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 29 09:10:39 compute-0 sudo[72282]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:39 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 29 09:10:39 compute-0 sudo[72374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbbqocsrlmxazopxrwmggkrdjymimzpj ; /usr/bin/python3'
Jan 29 09:10:39 compute-0 sudo[72374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:39 compute-0 python3[72376]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 09:10:39 compute-0 sudo[72374]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:40 compute-0 sudo[72447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwpxybrrbkqovqrmvxiaflkncfztascl ; /usr/bin/python3'
Jan 29 09:10:40 compute-0 sudo[72447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:40 compute-0 python3[72449]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769677839.582617-36376-72549753429059/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:10:40 compute-0 sudo[72447]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:40 compute-0 sudo[72497]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elgadoemqmyrjxmtpeuaytudpifytgru ; /usr/bin/python3'
Jan 29 09:10:40 compute-0 sudo[72497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:40 compute-0 python3[72499]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:10:40 compute-0 systemd[1]: Reloading.
Jan 29 09:10:41 compute-0 systemd-sysv-generator[72532]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:10:41 compute-0 systemd-rc-local-generator[72529]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:10:41 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 29 09:10:41 compute-0 bash[72539]: /dev/loop3: [64513]:4329562 (/var/lib/ceph-osd-0.img)
Jan 29 09:10:41 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 29 09:10:41 compute-0 lvm[72540]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:10:41 compute-0 lvm[72540]: VG ceph_vg0 finished
Jan 29 09:10:41 compute-0 sudo[72497]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:41 compute-0 sudo[72564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmfkvdkuapxkocvrxdzyqzakbblsxkxy ; /usr/bin/python3'
Jan 29 09:10:41 compute-0 sudo[72564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:41 compute-0 python3[72566]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 29 09:10:43 compute-0 sudo[72564]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:43 compute-0 sudo[72591]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umfyiaqvxykablfoemwkikmfvirectct ; /usr/bin/python3'
Jan 29 09:10:43 compute-0 sudo[72591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:43 compute-0 python3[72593]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 09:10:43 compute-0 sudo[72591]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:43 compute-0 sudo[72617]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foapisttrodcwiuxzwcbzfhujdlffrbd ; /usr/bin/python3'
Jan 29 09:10:43 compute-0 sudo[72617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:43 compute-0 python3[72619]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:10:43 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Jan 29 09:10:43 compute-0 sudo[72617]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:43 compute-0 sudo[72649]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgsslyazvsulrnixqbefdayzfzyihxxx ; /usr/bin/python3'
Jan 29 09:10:43 compute-0 sudo[72649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:44 compute-0 python3[72651]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:10:44 compute-0 lvm[72654]: PV /dev/loop4 not used.
Jan 29 09:10:44 compute-0 lvm[72656]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:10:44 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 29 09:10:44 compute-0 lvm[72667]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:10:44 compute-0 lvm[72667]: VG ceph_vg1 finished
Jan 29 09:10:44 compute-0 lvm[72664]:   1 logical volume(s) in volume group "ceph_vg1" now active
Jan 29 09:10:44 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 29 09:10:44 compute-0 sudo[72649]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:44 compute-0 sudo[72743]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ashwgxjvkussawhilmibimyaxqifcnpf ; /usr/bin/python3'
Jan 29 09:10:44 compute-0 sudo[72743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:44 compute-0 python3[72745]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 09:10:44 compute-0 sudo[72743]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:45 compute-0 sudo[72816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgwaochnrzmqtuimtsifkhmdslrjqhij ; /usr/bin/python3'
Jan 29 09:10:45 compute-0 sudo[72816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:45 compute-0 python3[72818]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769677844.5703778-36403-99171119152256/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:10:45 compute-0 sudo[72816]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:45 compute-0 sudo[72866]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyiywocmlinbkdwjpdbdkfoziyjpdfhu ; /usr/bin/python3'
Jan 29 09:10:45 compute-0 sudo[72866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:45 compute-0 python3[72868]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:10:45 compute-0 systemd[1]: Reloading.
Jan 29 09:10:45 compute-0 systemd-sysv-generator[72902]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:10:45 compute-0 systemd-rc-local-generator[72899]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:10:45 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 29 09:10:45 compute-0 bash[72909]: /dev/loop4: [64513]:4329569 (/var/lib/ceph-osd-1.img)
Jan 29 09:10:45 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 29 09:10:45 compute-0 lvm[72910]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:10:45 compute-0 lvm[72910]: VG ceph_vg1 finished
Jan 29 09:10:45 compute-0 sudo[72866]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:46 compute-0 sudo[72934]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-towpfggjaqsgyljmyoahlghoaaktdkep ; /usr/bin/python3'
Jan 29 09:10:46 compute-0 sudo[72934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:46 compute-0 python3[72936]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 29 09:10:47 compute-0 sudo[72934]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:47 compute-0 sudo[72961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeictbrmleouykimxqrmkflsrcvwxuig ; /usr/bin/python3'
Jan 29 09:10:47 compute-0 sudo[72961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:48 compute-0 python3[72963]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 09:10:48 compute-0 sudo[72961]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:48 compute-0 sudo[72987]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ublaoicfhvnvydoducvvftegimnmtcji ; /usr/bin/python3'
Jan 29 09:10:48 compute-0 sudo[72987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:48 compute-0 python3[72989]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:10:48 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Jan 29 09:10:48 compute-0 sudo[72987]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:48 compute-0 sudo[73019]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eswpycgtqdvfoueptxfrubvunhfsuwhs ; /usr/bin/python3'
Jan 29 09:10:48 compute-0 sudo[73019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:48 compute-0 python3[73021]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:10:48 compute-0 lvm[73024]: PV /dev/loop5 not used.
Jan 29 09:10:48 compute-0 lvm[73026]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:10:48 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 29 09:10:49 compute-0 lvm[73029]:   1 logical volume(s) in volume group "ceph_vg2" now active
Jan 29 09:10:49 compute-0 lvm[73036]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:10:49 compute-0 lvm[73036]: VG ceph_vg2 finished
Jan 29 09:10:49 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 29 09:10:49 compute-0 sudo[73019]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:49 compute-0 sudo[73112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjlymjreukirqsoqwpmrttbynxmltslb ; /usr/bin/python3'
Jan 29 09:10:49 compute-0 sudo[73112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:49 compute-0 python3[73114]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 09:10:49 compute-0 sudo[73112]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:49 compute-0 sudo[73185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dayqnxmnsqsqwjtmwcqhlfghxbkhnbxx ; /usr/bin/python3'
Jan 29 09:10:49 compute-0 sudo[73185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:49 compute-0 python3[73187]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769677849.35749-36430-86799986218324/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:10:49 compute-0 sudo[73185]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:50 compute-0 sudo[73235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkeowvmfebmbafjmfaezkznjtcycddge ; /usr/bin/python3'
Jan 29 09:10:50 compute-0 sudo[73235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:50 compute-0 python3[73237]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:10:50 compute-0 systemd[1]: Reloading.
Jan 29 09:10:50 compute-0 systemd-rc-local-generator[73258]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:10:50 compute-0 systemd-sysv-generator[73267]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:10:50 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 29 09:10:50 compute-0 bash[73277]: /dev/loop5: [64513]:4355718 (/var/lib/ceph-osd-2.img)
Jan 29 09:10:50 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 29 09:10:50 compute-0 lvm[73278]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:10:50 compute-0 lvm[73278]: VG ceph_vg2 finished
Jan 29 09:10:50 compute-0 sudo[73235]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:52 compute-0 python3[73302]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:10:54 compute-0 sudo[73393]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltctvttcwuvtyvdbddoitebnkhhmjkjq ; /usr/bin/python3'
Jan 29 09:10:54 compute-0 sudo[73393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:54 compute-0 python3[73395]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 29 09:10:56 compute-0 chronyd[58582]: Selected source 207.34.48.31 (pool.ntp.org)
Jan 29 09:10:56 compute-0 sudo[73393]: pam_unix(sudo:session): session closed for user root
Jan 29 09:10:56 compute-0 sudo[73451]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afthvnzkudsscqjcvovwaakbbijkmkec ; /usr/bin/python3'
Jan 29 09:10:56 compute-0 sudo[73451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:10:57 compute-0 python3[73453]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 29 09:11:00 compute-0 groupadd[73463]: group added to /etc/group: name=cephadm, GID=993
Jan 29 09:11:00 compute-0 groupadd[73463]: group added to /etc/gshadow: name=cephadm
Jan 29 09:11:00 compute-0 groupadd[73463]: new group: name=cephadm, GID=993
Jan 29 09:11:00 compute-0 useradd[73470]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 29 09:11:00 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 09:11:00 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 29 09:11:00 compute-0 sudo[73451]: pam_unix(sudo:session): session closed for user root
Jan 29 09:11:01 compute-0 sudo[73569]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skmqmlxodfdcumwipxaqgmhksfnjztbv ; /usr/bin/python3'
Jan 29 09:11:01 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 09:11:01 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 29 09:11:01 compute-0 sudo[73569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:11:01 compute-0 systemd[1]: run-r21df360005324aad8c62904584d4d1ea.service: Deactivated successfully.
Jan 29 09:11:01 compute-0 python3[73572]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 09:11:01 compute-0 sudo[73569]: pam_unix(sudo:session): session closed for user root
Jan 29 09:11:01 compute-0 sudo[73598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvbfcjigbfnagcjvgqkkdpechswoylfr ; /usr/bin/python3'
Jan 29 09:11:01 compute-0 sudo[73598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:11:01 compute-0 python3[73600]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 09:11:01 compute-0 sudo[73598]: pam_unix(sudo:session): session closed for user root
Jan 29 09:11:02 compute-0 sudo[73637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oflfehdhijstodwcdfzohslvacbwhyni ; /usr/bin/python3'
Jan 29 09:11:02 compute-0 sudo[73637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:11:02 compute-0 python3[73639]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:11:02 compute-0 sudo[73637]: pam_unix(sudo:session): session closed for user root
Jan 29 09:11:02 compute-0 sudo[73663]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzgopqfamedlutrayfggslxrxljkjrfm ; /usr/bin/python3'
Jan 29 09:11:02 compute-0 sudo[73663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:11:02 compute-0 python3[73665]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:11:02 compute-0 sudo[73663]: pam_unix(sudo:session): session closed for user root
Jan 29 09:11:03 compute-0 sudo[73741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbmflngnfvqgiatlgocpoaspaacxbmbi ; /usr/bin/python3'
Jan 29 09:11:03 compute-0 sudo[73741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:11:03 compute-0 python3[73743]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 09:11:03 compute-0 sudo[73741]: pam_unix(sudo:session): session closed for user root
Jan 29 09:11:03 compute-0 sudo[73814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kidetirbjifcjkesbnbzavjidyltclnc ; /usr/bin/python3'
Jan 29 09:11:03 compute-0 sudo[73814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:11:03 compute-0 python3[73816]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769677863.0603454-36578-166464692309201/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:11:03 compute-0 sudo[73814]: pam_unix(sudo:session): session closed for user root
Jan 29 09:11:04 compute-0 sudo[73916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqqqgpuqcusyjztfqavdkucuzxygenih ; /usr/bin/python3'
Jan 29 09:11:04 compute-0 sudo[73916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:11:04 compute-0 python3[73918]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 09:11:04 compute-0 sudo[73916]: pam_unix(sudo:session): session closed for user root
Jan 29 09:11:04 compute-0 sudo[73989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmlffhhehovzbtjcgjxmgcyjorvdzhhu ; /usr/bin/python3'
Jan 29 09:11:04 compute-0 sudo[73989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:11:04 compute-0 python3[73991]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769677864.0475092-36596-268698229647114/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:11:04 compute-0 sudo[73989]: pam_unix(sudo:session): session closed for user root
Jan 29 09:11:04 compute-0 sudo[74039]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twjfaudqriafhlylwmawnmjftyplwhyq ; /usr/bin/python3'
Jan 29 09:11:04 compute-0 sudo[74039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:11:05 compute-0 python3[74041]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 09:11:05 compute-0 sudo[74039]: pam_unix(sudo:session): session closed for user root
Jan 29 09:11:05 compute-0 sudo[74067]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqcospwiwywigbsxemedmprvemzqzqxv ; /usr/bin/python3'
Jan 29 09:11:05 compute-0 sudo[74067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:11:05 compute-0 python3[74069]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 09:11:05 compute-0 sudo[74067]: pam_unix(sudo:session): session closed for user root
Jan 29 09:11:05 compute-0 sudo[74095]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkjlesmzgyamfbtylgcuoydzdjvzyttt ; /usr/bin/python3'
Jan 29 09:11:05 compute-0 sudo[74095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:11:05 compute-0 python3[74097]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 09:11:05 compute-0 sudo[74095]: pam_unix(sudo:session): session closed for user root
Jan 29 09:11:05 compute-0 python3[74123]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 09:11:06 compute-0 sudo[74147]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrxtiyiryvgyjgtgppgislictmgnzjum ; /usr/bin/python3'
Jan 29 09:11:06 compute-0 sudo[74147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:11:06 compute-0 python3[74149]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config /home/ceph-admin/assimilate_ceph.conf --single-host-defaults --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:11:06 compute-0 sshd-session[74153]: Accepted publickey for ceph-admin from 192.168.122.100 port 45796 ssh2: RSA SHA256:TE3qk6UeXQNixiVnNk32+m51NOuCaiLmMMgUlYFkcfA
Jan 29 09:11:06 compute-0 systemd-logind[799]: New session 18 of user ceph-admin.
Jan 29 09:11:06 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 29 09:11:06 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 29 09:11:06 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 29 09:11:06 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 29 09:11:06 compute-0 systemd[74157]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:11:06 compute-0 systemd[74157]: Queued start job for default target Main User Target.
Jan 29 09:11:06 compute-0 systemd[74157]: Created slice User Application Slice.
Jan 29 09:11:06 compute-0 systemd[74157]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 29 09:11:06 compute-0 systemd[74157]: Started Daily Cleanup of User's Temporary Directories.
Jan 29 09:11:06 compute-0 systemd[74157]: Reached target Paths.
Jan 29 09:11:06 compute-0 systemd[74157]: Reached target Timers.
Jan 29 09:11:06 compute-0 systemd[74157]: Starting D-Bus User Message Bus Socket...
Jan 29 09:11:06 compute-0 systemd[74157]: Starting Create User's Volatile Files and Directories...
Jan 29 09:11:06 compute-0 systemd[74157]: Finished Create User's Volatile Files and Directories.
Jan 29 09:11:06 compute-0 systemd[74157]: Listening on D-Bus User Message Bus Socket.
Jan 29 09:11:06 compute-0 systemd[74157]: Reached target Sockets.
Jan 29 09:11:06 compute-0 systemd[74157]: Reached target Basic System.
Jan 29 09:11:06 compute-0 systemd[74157]: Reached target Main User Target.
Jan 29 09:11:06 compute-0 systemd[74157]: Startup finished in 120ms.
Jan 29 09:11:06 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 29 09:11:06 compute-0 systemd[1]: Started Session 18 of User ceph-admin.
Jan 29 09:11:06 compute-0 sshd-session[74153]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:11:06 compute-0 sudo[74173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 29 09:11:06 compute-0 sudo[74173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:11:06 compute-0 sudo[74173]: pam_unix(sudo:session): session closed for user root
Jan 29 09:11:06 compute-0 sshd-session[74172]: Received disconnect from 192.168.122.100 port 45796:11: disconnected by user
Jan 29 09:11:06 compute-0 sshd-session[74172]: Disconnected from user ceph-admin 192.168.122.100 port 45796
Jan 29 09:11:06 compute-0 sshd-session[74153]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 29 09:11:06 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 09:11:06 compute-0 systemd-logind[799]: Session 18 logged out. Waiting for processes to exit.
Jan 29 09:11:06 compute-0 systemd-logind[799]: Removed session 18.
Jan 29 09:11:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 09:11:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 09:11:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat336561561-lower\x2dmapped.mount: Deactivated successfully.
Jan 29 09:11:17 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 29 09:11:17 compute-0 systemd[74157]: Activating special unit Exit the Session...
Jan 29 09:11:17 compute-0 systemd[74157]: Stopped target Main User Target.
Jan 29 09:11:17 compute-0 systemd[74157]: Stopped target Basic System.
Jan 29 09:11:17 compute-0 systemd[74157]: Stopped target Paths.
Jan 29 09:11:17 compute-0 systemd[74157]: Stopped target Sockets.
Jan 29 09:11:17 compute-0 systemd[74157]: Stopped target Timers.
Jan 29 09:11:17 compute-0 systemd[74157]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 29 09:11:17 compute-0 systemd[74157]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 29 09:11:17 compute-0 systemd[74157]: Closed D-Bus User Message Bus Socket.
Jan 29 09:11:17 compute-0 systemd[74157]: Stopped Create User's Volatile Files and Directories.
Jan 29 09:11:17 compute-0 systemd[74157]: Removed slice User Application Slice.
Jan 29 09:11:17 compute-0 systemd[74157]: Reached target Shutdown.
Jan 29 09:11:17 compute-0 systemd[74157]: Finished Exit the Session.
Jan 29 09:11:17 compute-0 systemd[74157]: Reached target Exit the Session.
Jan 29 09:11:17 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 29 09:11:17 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 29 09:11:17 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 29 09:11:17 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 29 09:11:17 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 29 09:11:17 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 29 09:11:17 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 29 09:11:33 compute-0 podman[74251]: 2026-01-29 09:11:33.090540093 +0000 UTC m=+25.953451127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 09:11:33 compute-0 podman[74315]: 2026-01-29 09:11:33.154530203 +0000 UTC m=+0.043442529 container create ce95052acf06ce8042953641b21bed0bf2c3b1a712f4226ec0d297ef797b360b (image=quay.io/ceph/ceph:v20, name=zen_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 29 09:11:33 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 29 09:11:33 compute-0 systemd[1]: Started libpod-conmon-ce95052acf06ce8042953641b21bed0bf2c3b1a712f4226ec0d297ef797b360b.scope.
Jan 29 09:11:33 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:33 compute-0 podman[74315]: 2026-01-29 09:11:33.134113194 +0000 UTC m=+0.023025540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:33 compute-0 podman[74315]: 2026-01-29 09:11:33.244351038 +0000 UTC m=+0.133263394 container init ce95052acf06ce8042953641b21bed0bf2c3b1a712f4226ec0d297ef797b360b (image=quay.io/ceph/ceph:v20, name=zen_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:11:33 compute-0 podman[74315]: 2026-01-29 09:11:33.252333603 +0000 UTC m=+0.141245929 container start ce95052acf06ce8042953641b21bed0bf2c3b1a712f4226ec0d297ef797b360b (image=quay.io/ceph/ceph:v20, name=zen_noether, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 09:11:33 compute-0 podman[74315]: 2026-01-29 09:11:33.256394222 +0000 UTC m=+0.145306688 container attach ce95052acf06ce8042953641b21bed0bf2c3b1a712f4226ec0d297ef797b360b (image=quay.io/ceph/ceph:v20, name=zen_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:11:33 compute-0 zen_noether[74331]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 29 09:11:33 compute-0 systemd[1]: libpod-ce95052acf06ce8042953641b21bed0bf2c3b1a712f4226ec0d297ef797b360b.scope: Deactivated successfully.
Jan 29 09:11:33 compute-0 podman[74315]: 2026-01-29 09:11:33.345426995 +0000 UTC m=+0.234339341 container died ce95052acf06ce8042953641b21bed0bf2c3b1a712f4226ec0d297ef797b360b (image=quay.io/ceph/ceph:v20, name=zen_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2def607d79c42f82bdaaa9f83916512821e78f30111c6bceb73c71e3dbae148e-merged.mount: Deactivated successfully.
Jan 29 09:11:33 compute-0 podman[74315]: 2026-01-29 09:11:33.382982045 +0000 UTC m=+0.271894371 container remove ce95052acf06ce8042953641b21bed0bf2c3b1a712f4226ec0d297ef797b360b (image=quay.io/ceph/ceph:v20, name=zen_noether, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 09:11:33 compute-0 systemd[1]: libpod-conmon-ce95052acf06ce8042953641b21bed0bf2c3b1a712f4226ec0d297ef797b360b.scope: Deactivated successfully.
Jan 29 09:11:33 compute-0 podman[74347]: 2026-01-29 09:11:33.44236569 +0000 UTC m=+0.042131242 container create 6a91a13fb76ca9d4a566bca962dd8d2752a9274165d40f5aa43c2b6c3f3638cb (image=quay.io/ceph/ceph:v20, name=objective_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 09:11:33 compute-0 systemd[1]: Started libpod-conmon-6a91a13fb76ca9d4a566bca962dd8d2752a9274165d40f5aa43c2b6c3f3638cb.scope.
Jan 29 09:11:33 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:33 compute-0 podman[74347]: 2026-01-29 09:11:33.502787425 +0000 UTC m=+0.102553017 container init 6a91a13fb76ca9d4a566bca962dd8d2752a9274165d40f5aa43c2b6c3f3638cb (image=quay.io/ceph/ceph:v20, name=objective_mirzakhani, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 09:11:33 compute-0 podman[74347]: 2026-01-29 09:11:33.507147442 +0000 UTC m=+0.106912994 container start 6a91a13fb76ca9d4a566bca962dd8d2752a9274165d40f5aa43c2b6c3f3638cb (image=quay.io/ceph/ceph:v20, name=objective_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:33 compute-0 objective_mirzakhani[74361]: 167 167
Jan 29 09:11:33 compute-0 systemd[1]: libpod-6a91a13fb76ca9d4a566bca962dd8d2752a9274165d40f5aa43c2b6c3f3638cb.scope: Deactivated successfully.
Jan 29 09:11:33 compute-0 podman[74347]: 2026-01-29 09:11:33.513485522 +0000 UTC m=+0.113251084 container attach 6a91a13fb76ca9d4a566bca962dd8d2752a9274165d40f5aa43c2b6c3f3638cb (image=quay.io/ceph/ceph:v20, name=objective_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:33 compute-0 podman[74347]: 2026-01-29 09:11:33.514845369 +0000 UTC m=+0.114610921 container died 6a91a13fb76ca9d4a566bca962dd8d2752a9274165d40f5aa43c2b6c3f3638cb (image=quay.io/ceph/ceph:v20, name=objective_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:11:33 compute-0 podman[74347]: 2026-01-29 09:11:33.422720903 +0000 UTC m=+0.022486485 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:33 compute-0 podman[74347]: 2026-01-29 09:11:33.552917432 +0000 UTC m=+0.152682984 container remove 6a91a13fb76ca9d4a566bca962dd8d2752a9274165d40f5aa43c2b6c3f3638cb (image=quay.io/ceph/ceph:v20, name=objective_mirzakhani, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:11:33 compute-0 systemd[1]: libpod-conmon-6a91a13fb76ca9d4a566bca962dd8d2752a9274165d40f5aa43c2b6c3f3638cb.scope: Deactivated successfully.
Jan 29 09:11:33 compute-0 podman[74378]: 2026-01-29 09:11:33.604839018 +0000 UTC m=+0.036897683 container create 703878915d45495613aa62e8915d8d07e31ce27c00df3de17be6994bffff401a (image=quay.io/ceph/ceph:v20, name=amazing_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 29 09:11:33 compute-0 systemd[1]: Started libpod-conmon-703878915d45495613aa62e8915d8d07e31ce27c00df3de17be6994bffff401a.scope.
Jan 29 09:11:33 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:33 compute-0 podman[74378]: 2026-01-29 09:11:33.660689489 +0000 UTC m=+0.092748174 container init 703878915d45495613aa62e8915d8d07e31ce27c00df3de17be6994bffff401a (image=quay.io/ceph/ceph:v20, name=amazing_cori, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 09:11:33 compute-0 podman[74378]: 2026-01-29 09:11:33.665071597 +0000 UTC m=+0.097130262 container start 703878915d45495613aa62e8915d8d07e31ce27c00df3de17be6994bffff401a (image=quay.io/ceph/ceph:v20, name=amazing_cori, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 09:11:33 compute-0 podman[74378]: 2026-01-29 09:11:33.669026073 +0000 UTC m=+0.101084738 container attach 703878915d45495613aa62e8915d8d07e31ce27c00df3de17be6994bffff401a (image=quay.io/ceph/ceph:v20, name=amazing_cori, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 09:11:33 compute-0 amazing_cori[74395]: AQBFJHtpJzW7KBAAUVtqpUYlGuhr7Hn2BT9RmQ==
Jan 29 09:11:33 compute-0 podman[74378]: 2026-01-29 09:11:33.588396336 +0000 UTC m=+0.020455021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:33 compute-0 systemd[1]: libpod-703878915d45495613aa62e8915d8d07e31ce27c00df3de17be6994bffff401a.scope: Deactivated successfully.
Jan 29 09:11:33 compute-0 podman[74378]: 2026-01-29 09:11:33.686319808 +0000 UTC m=+0.118378473 container died 703878915d45495613aa62e8915d8d07e31ce27c00df3de17be6994bffff401a (image=quay.io/ceph/ceph:v20, name=amazing_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 09:11:33 compute-0 podman[74378]: 2026-01-29 09:11:33.717673061 +0000 UTC m=+0.149731726 container remove 703878915d45495613aa62e8915d8d07e31ce27c00df3de17be6994bffff401a (image=quay.io/ceph/ceph:v20, name=amazing_cori, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:33 compute-0 systemd[1]: libpod-conmon-703878915d45495613aa62e8915d8d07e31ce27c00df3de17be6994bffff401a.scope: Deactivated successfully.
Jan 29 09:11:33 compute-0 podman[74414]: 2026-01-29 09:11:33.767620754 +0000 UTC m=+0.033679616 container create 1862fbe3948ba6b1bf9e246015a55e537e053e0932fc8d8c12e1fb7ad17f5132 (image=quay.io/ceph/ceph:v20, name=musing_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:33 compute-0 systemd[1]: Started libpod-conmon-1862fbe3948ba6b1bf9e246015a55e537e053e0932fc8d8c12e1fb7ad17f5132.scope.
Jan 29 09:11:33 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:33 compute-0 podman[74414]: 2026-01-29 09:11:33.818659496 +0000 UTC m=+0.084718378 container init 1862fbe3948ba6b1bf9e246015a55e537e053e0932fc8d8c12e1fb7ad17f5132 (image=quay.io/ceph/ceph:v20, name=musing_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 09:11:33 compute-0 podman[74414]: 2026-01-29 09:11:33.822675674 +0000 UTC m=+0.088734536 container start 1862fbe3948ba6b1bf9e246015a55e537e053e0932fc8d8c12e1fb7ad17f5132 (image=quay.io/ceph/ceph:v20, name=musing_bhabha, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:11:33 compute-0 podman[74414]: 2026-01-29 09:11:33.8258631 +0000 UTC m=+0.091921992 container attach 1862fbe3948ba6b1bf9e246015a55e537e053e0932fc8d8c12e1fb7ad17f5132 (image=quay.io/ceph/ceph:v20, name=musing_bhabha, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 09:11:33 compute-0 musing_bhabha[74430]: AQBFJHtpDu4CMhAA+Fri6jX72yvSeqo52TF4fg==
Jan 29 09:11:33 compute-0 systemd[1]: libpod-1862fbe3948ba6b1bf9e246015a55e537e053e0932fc8d8c12e1fb7ad17f5132.scope: Deactivated successfully.
Jan 29 09:11:33 compute-0 podman[74414]: 2026-01-29 09:11:33.841544991 +0000 UTC m=+0.107603853 container died 1862fbe3948ba6b1bf9e246015a55e537e053e0932fc8d8c12e1fb7ad17f5132 (image=quay.io/ceph/ceph:v20, name=musing_bhabha, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:11:33 compute-0 podman[74414]: 2026-01-29 09:11:33.753105074 +0000 UTC m=+0.019163956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:33 compute-0 podman[74414]: 2026-01-29 09:11:33.878490645 +0000 UTC m=+0.144549517 container remove 1862fbe3948ba6b1bf9e246015a55e537e053e0932fc8d8c12e1fb7ad17f5132 (image=quay.io/ceph/ceph:v20, name=musing_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 09:11:33 compute-0 systemd[1]: libpod-conmon-1862fbe3948ba6b1bf9e246015a55e537e053e0932fc8d8c12e1fb7ad17f5132.scope: Deactivated successfully.
Jan 29 09:11:33 compute-0 podman[74449]: 2026-01-29 09:11:33.940325467 +0000 UTC m=+0.047246341 container create ce5362f3da3583aea8497a1d65263a132074ce63a9061de12282655aa87a66c0 (image=quay.io/ceph/ceph:v20, name=wizardly_wozniak, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 09:11:33 compute-0 systemd[1]: Started libpod-conmon-ce5362f3da3583aea8497a1d65263a132074ce63a9061de12282655aa87a66c0.scope.
Jan 29 09:11:33 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:33 compute-0 podman[74449]: 2026-01-29 09:11:33.994401811 +0000 UTC m=+0.101322695 container init ce5362f3da3583aea8497a1d65263a132074ce63a9061de12282655aa87a66c0 (image=quay.io/ceph/ceph:v20, name=wizardly_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 09:11:33 compute-0 podman[74449]: 2026-01-29 09:11:33.999118067 +0000 UTC m=+0.106038951 container start ce5362f3da3583aea8497a1d65263a132074ce63a9061de12282655aa87a66c0 (image=quay.io/ceph/ceph:v20, name=wizardly_wozniak, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:11:34 compute-0 podman[74449]: 2026-01-29 09:11:34.004178943 +0000 UTC m=+0.111099817 container attach ce5362f3da3583aea8497a1d65263a132074ce63a9061de12282655aa87a66c0 (image=quay.io/ceph/ceph:v20, name=wizardly_wozniak, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 09:11:34 compute-0 podman[74449]: 2026-01-29 09:11:33.91479431 +0000 UTC m=+0.021715204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:34 compute-0 wizardly_wozniak[74465]: AQBGJHtpcOFIARAA7PBtyJoEI7BO17gaiD0D1A==
Jan 29 09:11:34 compute-0 systemd[1]: libpod-ce5362f3da3583aea8497a1d65263a132074ce63a9061de12282655aa87a66c0.scope: Deactivated successfully.
Jan 29 09:11:34 compute-0 podman[74449]: 2026-01-29 09:11:34.024540791 +0000 UTC m=+0.131461675 container died ce5362f3da3583aea8497a1d65263a132074ce63a9061de12282655aa87a66c0 (image=quay.io/ceph/ceph:v20, name=wizardly_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:11:34 compute-0 podman[74449]: 2026-01-29 09:11:34.063915339 +0000 UTC m=+0.170836213 container remove ce5362f3da3583aea8497a1d65263a132074ce63a9061de12282655aa87a66c0 (image=quay.io/ceph/ceph:v20, name=wizardly_wozniak, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:34 compute-0 systemd[1]: libpod-conmon-ce5362f3da3583aea8497a1d65263a132074ce63a9061de12282655aa87a66c0.scope: Deactivated successfully.
Jan 29 09:11:34 compute-0 podman[74485]: 2026-01-29 09:11:34.119450982 +0000 UTC m=+0.036916883 container create d49c02caf47c69e0d84f005d1c13a75593ae59b033bfdf6993fb099f4900195b (image=quay.io/ceph/ceph:v20, name=musing_villani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:11:34 compute-0 systemd[1]: Started libpod-conmon-d49c02caf47c69e0d84f005d1c13a75593ae59b033bfdf6993fb099f4900195b.scope.
Jan 29 09:11:34 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea23a22047c33eb8a9a8f9dae3baceb0efb8f4111ebfa15d65ced23baaa59523/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:34 compute-0 podman[74485]: 2026-01-29 09:11:34.180920835 +0000 UTC m=+0.098386756 container init d49c02caf47c69e0d84f005d1c13a75593ae59b033bfdf6993fb099f4900195b (image=quay.io/ceph/ceph:v20, name=musing_villani, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:11:34 compute-0 podman[74485]: 2026-01-29 09:11:34.186797033 +0000 UTC m=+0.104262934 container start d49c02caf47c69e0d84f005d1c13a75593ae59b033bfdf6993fb099f4900195b (image=quay.io/ceph/ceph:v20, name=musing_villani, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:34 compute-0 podman[74485]: 2026-01-29 09:11:34.190713128 +0000 UTC m=+0.108179059 container attach d49c02caf47c69e0d84f005d1c13a75593ae59b033bfdf6993fb099f4900195b (image=quay.io/ceph/ceph:v20, name=musing_villani, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 29 09:11:34 compute-0 podman[74485]: 2026-01-29 09:11:34.10409947 +0000 UTC m=+0.021565391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:34 compute-0 musing_villani[74502]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 29 09:11:34 compute-0 musing_villani[74502]: setting min_mon_release = tentacle
Jan 29 09:11:34 compute-0 musing_villani[74502]: /usr/bin/monmaptool: set fsid to 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:11:34 compute-0 musing_villani[74502]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 29 09:11:34 compute-0 systemd[1]: libpod-d49c02caf47c69e0d84f005d1c13a75593ae59b033bfdf6993fb099f4900195b.scope: Deactivated successfully.
Jan 29 09:11:34 compute-0 podman[74485]: 2026-01-29 09:11:34.213812469 +0000 UTC m=+0.131278370 container died d49c02caf47c69e0d84f005d1c13a75593ae59b033bfdf6993fb099f4900195b (image=quay.io/ceph/ceph:v20, name=musing_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea23a22047c33eb8a9a8f9dae3baceb0efb8f4111ebfa15d65ced23baaa59523-merged.mount: Deactivated successfully.
Jan 29 09:11:34 compute-0 podman[74485]: 2026-01-29 09:11:34.248984935 +0000 UTC m=+0.166450836 container remove d49c02caf47c69e0d84f005d1c13a75593ae59b033bfdf6993fb099f4900195b (image=quay.io/ceph/ceph:v20, name=musing_villani, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 09:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 09:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 09:11:34 compute-0 systemd[1]: libpod-conmon-d49c02caf47c69e0d84f005d1c13a75593ae59b033bfdf6993fb099f4900195b.scope: Deactivated successfully.
Jan 29 09:11:34 compute-0 podman[74521]: 2026-01-29 09:11:34.308787422 +0000 UTC m=+0.040440718 container create 54c12cab0cfecc6ef57612a35a3be713354bc2468d0a5777722bc3bbac2f3dec (image=quay.io/ceph/ceph:v20, name=bold_noether, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 09:11:34 compute-0 systemd[1]: Started libpod-conmon-54c12cab0cfecc6ef57612a35a3be713354bc2468d0a5777722bc3bbac2f3dec.scope.
Jan 29 09:11:34 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef69cd5007eeceed343ec9c31fa2e297ca480c47c078011f18184c782365f78/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef69cd5007eeceed343ec9c31fa2e297ca480c47c078011f18184c782365f78/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef69cd5007eeceed343ec9c31fa2e297ca480c47c078011f18184c782365f78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef69cd5007eeceed343ec9c31fa2e297ca480c47c078011f18184c782365f78/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:34 compute-0 podman[74521]: 2026-01-29 09:11:34.372247918 +0000 UTC m=+0.103901194 container init 54c12cab0cfecc6ef57612a35a3be713354bc2468d0a5777722bc3bbac2f3dec (image=quay.io/ceph/ceph:v20, name=bold_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 29 09:11:34 compute-0 podman[74521]: 2026-01-29 09:11:34.377081688 +0000 UTC m=+0.108734964 container start 54c12cab0cfecc6ef57612a35a3be713354bc2468d0a5777722bc3bbac2f3dec (image=quay.io/ceph/ceph:v20, name=bold_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:11:34 compute-0 podman[74521]: 2026-01-29 09:11:34.380903671 +0000 UTC m=+0.112556957 container attach 54c12cab0cfecc6ef57612a35a3be713354bc2468d0a5777722bc3bbac2f3dec (image=quay.io/ceph/ceph:v20, name=bold_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:34 compute-0 podman[74521]: 2026-01-29 09:11:34.290433379 +0000 UTC m=+0.022086665 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:34 compute-0 systemd[1]: libpod-54c12cab0cfecc6ef57612a35a3be713354bc2468d0a5777722bc3bbac2f3dec.scope: Deactivated successfully.
Jan 29 09:11:34 compute-0 podman[74521]: 2026-01-29 09:11:34.448117538 +0000 UTC m=+0.179770804 container died 54c12cab0cfecc6ef57612a35a3be713354bc2468d0a5777722bc3bbac2f3dec (image=quay.io/ceph/ceph:v20, name=bold_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:11:34 compute-0 podman[74521]: 2026-01-29 09:11:34.49097334 +0000 UTC m=+0.222626606 container remove 54c12cab0cfecc6ef57612a35a3be713354bc2468d0a5777722bc3bbac2f3dec (image=quay.io/ceph/ceph:v20, name=bold_noether, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 09:11:34 compute-0 systemd[1]: libpod-conmon-54c12cab0cfecc6ef57612a35a3be713354bc2468d0a5777722bc3bbac2f3dec.scope: Deactivated successfully.
Jan 29 09:11:34 compute-0 systemd[1]: Reloading.
Jan 29 09:11:34 compute-0 systemd-sysv-generator[74608]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:11:34 compute-0 systemd-rc-local-generator[74604]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:11:34 compute-0 systemd[1]: Reloading.
Jan 29 09:11:34 compute-0 systemd-sysv-generator[74640]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:11:34 compute-0 systemd-rc-local-generator[74637]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:11:34 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 29 09:11:34 compute-0 systemd[1]: Reloading.
Jan 29 09:11:35 compute-0 systemd-rc-local-generator[74681]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:11:35 compute-0 systemd-sysv-generator[74685]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:11:35 compute-0 systemd[1]: Reached target Ceph cluster 3fdce3ca-565d-5459-88e8-1ffe58b48437.
Jan 29 09:11:35 compute-0 systemd[1]: Reloading.
Jan 29 09:11:35 compute-0 systemd-rc-local-generator[74713]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:11:35 compute-0 systemd-sysv-generator[74718]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:11:35 compute-0 systemd[1]: Reloading.
Jan 29 09:11:35 compute-0 systemd-rc-local-generator[74755]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:11:35 compute-0 systemd-sysv-generator[74760]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:11:35 compute-0 systemd[1]: Created slice Slice /system/ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437.
Jan 29 09:11:35 compute-0 systemd[1]: Reached target System Time Set.
Jan 29 09:11:35 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 29 09:11:35 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 3fdce3ca-565d-5459-88e8-1ffe58b48437...
Jan 29 09:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 09:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 09:11:35 compute-0 podman[74812]: 2026-01-29 09:11:35.945400409 +0000 UTC m=+0.040715886 container create 62b9430d348ead4e91edff83c6054eda3906193a97ea1af6e9bc4714e6baf4c9 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 09:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a059043fcbb5f263d6d1d72f6c52c03d4b207b16c28e77aceb91a1669392b223/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a059043fcbb5f263d6d1d72f6c52c03d4b207b16c28e77aceb91a1669392b223/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a059043fcbb5f263d6d1d72f6c52c03d4b207b16c28e77aceb91a1669392b223/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a059043fcbb5f263d6d1d72f6c52c03d4b207b16c28e77aceb91a1669392b223/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:35 compute-0 podman[74812]: 2026-01-29 09:11:35.999249787 +0000 UTC m=+0.094565294 container init 62b9430d348ead4e91edff83c6054eda3906193a97ea1af6e9bc4714e6baf4c9 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 09:11:36 compute-0 podman[74812]: 2026-01-29 09:11:36.005281729 +0000 UTC m=+0.100597206 container start 62b9430d348ead4e91edff83c6054eda3906193a97ea1af6e9bc4714e6baf4c9 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 09:11:36 compute-0 bash[74812]: 62b9430d348ead4e91edff83c6054eda3906193a97ea1af6e9bc4714e6baf4c9
Jan 29 09:11:36 compute-0 podman[74812]: 2026-01-29 09:11:35.927676313 +0000 UTC m=+0.022991820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:36 compute-0 systemd[1]: Started Ceph mon.compute-0 for 3fdce3ca-565d-5459-88e8-1ffe58b48437.
Jan 29 09:11:36 compute-0 ceph-mon[74831]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 09:11:36 compute-0 ceph-mon[74831]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 29 09:11:36 compute-0 ceph-mon[74831]: pidfile_write: ignore empty --pid-file
Jan 29 09:11:36 compute-0 ceph-mon[74831]: load: jerasure load: lrc 
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: RocksDB version: 7.9.2
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Git sha 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: DB SUMMARY
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: DB Session ID:  IKSW680MWVYLFPQU6TAJ
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: CURRENT file:  CURRENT
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                         Options.error_if_exists: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                       Options.create_if_missing: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                                     Options.env: 0x562ac9c28440
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                                Options.info_log: 0x562acc42b3e0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                              Options.statistics: (nil)
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                               Options.use_fsync: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                              Options.db_log_dir: 
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                                 Options.wal_dir: 
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                    Options.write_buffer_manager: 0x562acc3aa140
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                  Options.unordered_write: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                               Options.row_cache: None
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                              Options.wal_filter: None
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.two_write_queues: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.wal_compression: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.atomic_flush: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.max_background_jobs: 2
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.max_background_compactions: -1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.max_subcompactions: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.max_total_wal_size: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                          Options.max_open_files: -1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:       Options.compaction_readahead_size: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Compression algorithms supported:
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         kZSTD supported: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         kXpressCompression supported: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         kBZip2Compression supported: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         kLZ4Compression supported: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         kZlibCompression supported: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         kLZ4HCCompression supported: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         kSnappyCompression supported: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:           Options.merge_operator: 
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:        Options.compaction_filter: None
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562acc3b6700)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562acc39b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:        Options.write_buffer_size: 33554432
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:  Options.max_write_buffer_number: 2
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:          Options.compression: NoCompression
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.num_levels: 7
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fd1ba9b2-de41-4690-b6ef-2d821bd14da8
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677896052619, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677896054504, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677896, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "IKSW680MWVYLFPQU6TAJ", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677896054606, "job": 1, "event": "recovery_finished"}
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562acc3c8e00
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: DB pointer 0x562acc514000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:11:36 compute-0 ceph-mon[74831]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562acc39b8d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 29 09:11:36 compute-0 ceph-mon[74831]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@-1(???) e0 preinit fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 29 09:11:36 compute-0 ceph-mon[74831]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 29 09:11:36 compute-0 podman[74832]: 2026-01-29 09:11:36.090330835 +0000 UTC m=+0.046572723 container create d2ae97bde2cde54078c563baf5c832659ddc0d78e50138a67e215f25afba7683 (image=quay.io/ceph/ceph:v20, name=festive_nightingale, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 29 09:11:36 compute-0 ceph-mon[74831]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(cluster) log [DBG] : fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(cluster) log [DBG] : last_changed 2026-01-29T09:11:34.210489+0000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(cluster) log [DBG] : created 2026-01-29T09:11:34.210489+0000
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-29T09:11:34.410199Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864296,os=Linux}
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).mds e1 new map
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-01-29T09:11:36:099108+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(cluster) log [DBG] : fsmap 
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mkfs 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 29 09:11:36 compute-0 systemd[1]: Started libpod-conmon-d2ae97bde2cde54078c563baf5c832659ddc0d78e50138a67e215f25afba7683.scope.
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 29 09:11:36 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c09625571a1f0f700f5271134d88f48a1b6b301f2143fffc7a6159e190f7f55d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c09625571a1f0f700f5271134d88f48a1b6b301f2143fffc7a6159e190f7f55d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c09625571a1f0f700f5271134d88f48a1b6b301f2143fffc7a6159e190f7f55d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:36 compute-0 podman[74832]: 2026-01-29 09:11:36.068049806 +0000 UTC m=+0.024291714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:36 compute-0 podman[74832]: 2026-01-29 09:11:36.175875605 +0000 UTC m=+0.132117523 container init d2ae97bde2cde54078c563baf5c832659ddc0d78e50138a67e215f25afba7683 (image=quay.io/ceph/ceph:v20, name=festive_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:11:36 compute-0 podman[74832]: 2026-01-29 09:11:36.184183028 +0000 UTC m=+0.140424916 container start d2ae97bde2cde54078c563baf5c832659ddc0d78e50138a67e215f25afba7683 (image=quay.io/ceph/ceph:v20, name=festive_nightingale, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:11:36 compute-0 podman[74832]: 2026-01-29 09:11:36.188674699 +0000 UTC m=+0.144916607 container attach d2ae97bde2cde54078c563baf5c832659ddc0d78e50138a67e215f25afba7683 (image=quay.io/ceph/ceph:v20, name=festive_nightingale, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918842316' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:   cluster:
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:     id:     3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:     health: HEALTH_OK
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:  
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:   services:
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:     mon: 1 daemons, quorum compute-0 (age 0.290418s) [leader: compute-0]
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:     mgr: no daemons active
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:     osd: 0 osds: 0 up, 0 in
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:  
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:   data:
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:     pools:   0 pools, 0 pgs
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:     objects: 0 objects, 0 B
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:     usage:   0 B used, 0 B / 0 B avail
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:     pgs:     
Jan 29 09:11:36 compute-0 festive_nightingale[74886]:  
Jan 29 09:11:36 compute-0 systemd[1]: libpod-d2ae97bde2cde54078c563baf5c832659ddc0d78e50138a67e215f25afba7683.scope: Deactivated successfully.
Jan 29 09:11:36 compute-0 podman[74832]: 2026-01-29 09:11:36.401798318 +0000 UTC m=+0.358040236 container died d2ae97bde2cde54078c563baf5c832659ddc0d78e50138a67e215f25afba7683 (image=quay.io/ceph/ceph:v20, name=festive_nightingale, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c09625571a1f0f700f5271134d88f48a1b6b301f2143fffc7a6159e190f7f55d-merged.mount: Deactivated successfully.
Jan 29 09:11:36 compute-0 podman[74832]: 2026-01-29 09:11:36.48556617 +0000 UTC m=+0.441808068 container remove d2ae97bde2cde54078c563baf5c832659ddc0d78e50138a67e215f25afba7683 (image=quay.io/ceph/ceph:v20, name=festive_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 09:11:36 compute-0 systemd[1]: libpod-conmon-d2ae97bde2cde54078c563baf5c832659ddc0d78e50138a67e215f25afba7683.scope: Deactivated successfully.
Jan 29 09:11:36 compute-0 podman[74924]: 2026-01-29 09:11:36.546719814 +0000 UTC m=+0.042522084 container create 142c08adb1804fb9be5f46d1d0c19197c6223f0fb87a346c6ce2b14d78d14998 (image=quay.io/ceph/ceph:v20, name=peaceful_hypatia, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:11:36 compute-0 systemd[1]: Started libpod-conmon-142c08adb1804fb9be5f46d1d0c19197c6223f0fb87a346c6ce2b14d78d14998.scope.
Jan 29 09:11:36 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/782cc692eeeb3baf50af66fe5f6d276a5e5c9f5c85a0d531915f458e8d4c3dfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/782cc692eeeb3baf50af66fe5f6d276a5e5c9f5c85a0d531915f458e8d4c3dfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/782cc692eeeb3baf50af66fe5f6d276a5e5c9f5c85a0d531915f458e8d4c3dfa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/782cc692eeeb3baf50af66fe5f6d276a5e5c9f5c85a0d531915f458e8d4c3dfa/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:36 compute-0 podman[74924]: 2026-01-29 09:11:36.616003937 +0000 UTC m=+0.111806237 container init 142c08adb1804fb9be5f46d1d0c19197c6223f0fb87a346c6ce2b14d78d14998 (image=quay.io/ceph/ceph:v20, name=peaceful_hypatia, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:11:36 compute-0 podman[74924]: 2026-01-29 09:11:36.621751631 +0000 UTC m=+0.117553901 container start 142c08adb1804fb9be5f46d1d0c19197c6223f0fb87a346c6ce2b14d78d14998 (image=quay.io/ceph/ceph:v20, name=peaceful_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:11:36 compute-0 podman[74924]: 2026-01-29 09:11:36.526542392 +0000 UTC m=+0.022344662 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:36 compute-0 podman[74924]: 2026-01-29 09:11:36.645927901 +0000 UTC m=+0.141730201 container attach 142c08adb1804fb9be5f46d1d0c19197c6223f0fb87a346c6ce2b14d78d14998 (image=quay.io/ceph/ceph:v20, name=peaceful_hypatia, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 09:11:36 compute-0 ceph-mon[74831]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3295253017' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 29 09:11:36 compute-0 ceph-mon[74831]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3295253017' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 29 09:11:36 compute-0 peaceful_hypatia[74940]: 
Jan 29 09:11:36 compute-0 peaceful_hypatia[74940]: [global]
Jan 29 09:11:36 compute-0 peaceful_hypatia[74940]:         fsid = 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:11:36 compute-0 peaceful_hypatia[74940]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 29 09:11:36 compute-0 peaceful_hypatia[74940]:         osd_crush_chooseleaf_type = 0
Jan 29 09:11:36 compute-0 systemd[1]: libpod-142c08adb1804fb9be5f46d1d0c19197c6223f0fb87a346c6ce2b14d78d14998.scope: Deactivated successfully.
Jan 29 09:11:36 compute-0 conmon[74940]: conmon 142c08adb1804fb9be5f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-142c08adb1804fb9be5f46d1d0c19197c6223f0fb87a346c6ce2b14d78d14998.scope/container/memory.events
Jan 29 09:11:36 compute-0 podman[74924]: 2026-01-29 09:11:36.851896828 +0000 UTC m=+0.347699098 container died 142c08adb1804fb9be5f46d1d0c19197c6223f0fb87a346c6ce2b14d78d14998 (image=quay.io/ceph/ceph:v20, name=peaceful_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-782cc692eeeb3baf50af66fe5f6d276a5e5c9f5c85a0d531915f458e8d4c3dfa-merged.mount: Deactivated successfully.
Jan 29 09:11:36 compute-0 podman[74924]: 2026-01-29 09:11:36.944886098 +0000 UTC m=+0.440688368 container remove 142c08adb1804fb9be5f46d1d0c19197c6223f0fb87a346c6ce2b14d78d14998 (image=quay.io/ceph/ceph:v20, name=peaceful_hypatia, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 09:11:36 compute-0 systemd[1]: libpod-conmon-142c08adb1804fb9be5f46d1d0c19197c6223f0fb87a346c6ce2b14d78d14998.scope: Deactivated successfully.
Jan 29 09:11:37 compute-0 podman[74976]: 2026-01-29 09:11:37.004410078 +0000 UTC m=+0.041114526 container create 01e6198fe70cf269adb808dfe2f5f505f7aaeab112241dc3197fc48f0512e304 (image=quay.io/ceph/ceph:v20, name=dreamy_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:37 compute-0 systemd[1]: Started libpod-conmon-01e6198fe70cf269adb808dfe2f5f505f7aaeab112241dc3197fc48f0512e304.scope.
Jan 29 09:11:37 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f469432bf798c905f222ca9ab5ac00f8ba716f93dd169d08e65df29a1399c365/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f469432bf798c905f222ca9ab5ac00f8ba716f93dd169d08e65df29a1399c365/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f469432bf798c905f222ca9ab5ac00f8ba716f93dd169d08e65df29a1399c365/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f469432bf798c905f222ca9ab5ac00f8ba716f93dd169d08e65df29a1399c365/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:37 compute-0 podman[74976]: 2026-01-29 09:11:36.986639191 +0000 UTC m=+0.023343669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:37 compute-0 podman[74976]: 2026-01-29 09:11:37.084391288 +0000 UTC m=+0.121095776 container init 01e6198fe70cf269adb808dfe2f5f505f7aaeab112241dc3197fc48f0512e304 (image=quay.io/ceph/ceph:v20, name=dreamy_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:37 compute-0 podman[74976]: 2026-01-29 09:11:37.089041632 +0000 UTC m=+0.125746090 container start 01e6198fe70cf269adb808dfe2f5f505f7aaeab112241dc3197fc48f0512e304 (image=quay.io/ceph/ceph:v20, name=dreamy_pascal, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:37 compute-0 podman[74976]: 2026-01-29 09:11:37.094460698 +0000 UTC m=+0.131165166 container attach 01e6198fe70cf269adb808dfe2f5f505f7aaeab112241dc3197fc48f0512e304 (image=quay.io/ceph/ceph:v20, name=dreamy_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:11:37 compute-0 ceph-mon[74831]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 29 09:11:37 compute-0 ceph-mon[74831]: monmap epoch 1
Jan 29 09:11:37 compute-0 ceph-mon[74831]: fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:11:37 compute-0 ceph-mon[74831]: last_changed 2026-01-29T09:11:34.210489+0000
Jan 29 09:11:37 compute-0 ceph-mon[74831]: created 2026-01-29T09:11:34.210489+0000
Jan 29 09:11:37 compute-0 ceph-mon[74831]: min_mon_release 20 (tentacle)
Jan 29 09:11:37 compute-0 ceph-mon[74831]: election_strategy: 1
Jan 29 09:11:37 compute-0 ceph-mon[74831]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 29 09:11:37 compute-0 ceph-mon[74831]: fsmap 
Jan 29 09:11:37 compute-0 ceph-mon[74831]: osdmap e1: 0 total, 0 up, 0 in
Jan 29 09:11:37 compute-0 ceph-mon[74831]: mgrmap e1: no daemons active
Jan 29 09:11:37 compute-0 ceph-mon[74831]: from='client.? 192.168.122.100:0/2918842316' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 29 09:11:37 compute-0 ceph-mon[74831]: from='client.? 192.168.122.100:0/3295253017' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 29 09:11:37 compute-0 ceph-mon[74831]: from='client.? 192.168.122.100:0/3295253017' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 29 09:11:37 compute-0 ceph-mon[74831]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:11:37 compute-0 ceph-mon[74831]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2761269626' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:11:37 compute-0 systemd[1]: libpod-01e6198fe70cf269adb808dfe2f5f505f7aaeab112241dc3197fc48f0512e304.scope: Deactivated successfully.
Jan 29 09:11:37 compute-0 podman[74976]: 2026-01-29 09:11:37.282430681 +0000 UTC m=+0.319135159 container died 01e6198fe70cf269adb808dfe2f5f505f7aaeab112241dc3197fc48f0512e304 (image=quay.io/ceph/ceph:v20, name=dreamy_pascal, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f469432bf798c905f222ca9ab5ac00f8ba716f93dd169d08e65df29a1399c365-merged.mount: Deactivated successfully.
Jan 29 09:11:37 compute-0 podman[74976]: 2026-01-29 09:11:37.325502529 +0000 UTC m=+0.362206987 container remove 01e6198fe70cf269adb808dfe2f5f505f7aaeab112241dc3197fc48f0512e304 (image=quay.io/ceph/ceph:v20, name=dreamy_pascal, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:11:37 compute-0 systemd[1]: libpod-conmon-01e6198fe70cf269adb808dfe2f5f505f7aaeab112241dc3197fc48f0512e304.scope: Deactivated successfully.
Jan 29 09:11:37 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 3fdce3ca-565d-5459-88e8-1ffe58b48437...
Jan 29 09:11:37 compute-0 ceph-mon[74831]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 29 09:11:37 compute-0 ceph-mon[74831]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 29 09:11:37 compute-0 ceph-mon[74831]: mon.compute-0@0(leader) e1 shutdown
Jan 29 09:11:37 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0[74827]: 2026-01-29T09:11:37.489+0000 7f57ca67e640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 29 09:11:37 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0[74827]: 2026-01-29T09:11:37.489+0000 7f57ca67e640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 29 09:11:37 compute-0 ceph-mon[74831]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 29 09:11:37 compute-0 ceph-mon[74831]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 29 09:11:37 compute-0 podman[75061]: 2026-01-29 09:11:37.633489989 +0000 UTC m=+0.174150563 container died 62b9430d348ead4e91edff83c6054eda3906193a97ea1af6e9bc4714e6baf4c9 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 29 09:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a059043fcbb5f263d6d1d72f6c52c03d4b207b16c28e77aceb91a1669392b223-merged.mount: Deactivated successfully.
Jan 29 09:11:37 compute-0 podman[75061]: 2026-01-29 09:11:37.671193412 +0000 UTC m=+0.211853986 container remove 62b9430d348ead4e91edff83c6054eda3906193a97ea1af6e9bc4714e6baf4c9 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 09:11:37 compute-0 bash[75061]: ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0
Jan 29 09:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 09:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 09:11:37 compute-0 systemd[1]: ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437@mon.compute-0.service: Deactivated successfully.
Jan 29 09:11:37 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 3fdce3ca-565d-5459-88e8-1ffe58b48437.
Jan 29 09:11:37 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 3fdce3ca-565d-5459-88e8-1ffe58b48437...
Jan 29 09:11:37 compute-0 podman[75164]: 2026-01-29 09:11:37.961011253 +0000 UTC m=+0.038081684 container create 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb7b704976209d5f2e0d8b686b18a94f27dfb34e058b6cb65a84fecf6d8f3bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb7b704976209d5f2e0d8b686b18a94f27dfb34e058b6cb65a84fecf6d8f3bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb7b704976209d5f2e0d8b686b18a94f27dfb34e058b6cb65a84fecf6d8f3bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb7b704976209d5f2e0d8b686b18a94f27dfb34e058b6cb65a84fecf6d8f3bc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:38 compute-0 podman[75164]: 2026-01-29 09:11:38.023513664 +0000 UTC m=+0.100584115 container init 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 29 09:11:38 compute-0 podman[75164]: 2026-01-29 09:11:38.02857783 +0000 UTC m=+0.105648261 container start 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 09:11:38 compute-0 bash[75164]: 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53
Jan 29 09:11:38 compute-0 podman[75164]: 2026-01-29 09:11:37.942630669 +0000 UTC m=+0.019701120 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:38 compute-0 systemd[1]: Started Ceph mon.compute-0 for 3fdce3ca-565d-5459-88e8-1ffe58b48437.
Jan 29 09:11:38 compute-0 ceph-mon[75183]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 09:11:38 compute-0 ceph-mon[75183]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 29 09:11:38 compute-0 ceph-mon[75183]: pidfile_write: ignore empty --pid-file
Jan 29 09:11:38 compute-0 ceph-mon[75183]: load: jerasure load: lrc 
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: RocksDB version: 7.9.2
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Git sha 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: DB SUMMARY
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: DB Session ID:  ZPKAZ83W1X1TPV5WJTP0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: CURRENT file:  CURRENT
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                         Options.error_if_exists: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                       Options.create_if_missing: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                                     Options.env: 0x55621c6e0440
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                                Options.info_log: 0x55621d603e80
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                              Options.statistics: (nil)
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                               Options.use_fsync: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                              Options.db_log_dir: 
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                                 Options.wal_dir: 
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                    Options.write_buffer_manager: 0x55621d64e140
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                  Options.unordered_write: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                               Options.row_cache: None
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                              Options.wal_filter: None
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.two_write_queues: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.wal_compression: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.atomic_flush: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.max_background_jobs: 2
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.max_background_compactions: -1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.max_subcompactions: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.max_total_wal_size: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                          Options.max_open_files: -1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:       Options.compaction_readahead_size: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Compression algorithms supported:
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         kZSTD supported: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         kXpressCompression supported: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         kBZip2Compression supported: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         kLZ4Compression supported: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         kZlibCompression supported: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         kLZ4HCCompression supported: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         kSnappyCompression supported: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:           Options.merge_operator: 
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:        Options.compaction_filter: None
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55621d65aa00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55621d63f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:        Options.write_buffer_size: 33554432
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:  Options.max_write_buffer_number: 2
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:          Options.compression: NoCompression
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.num_levels: 7
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fd1ba9b2-de41-4690-b6ef-2d821bd14da8
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677898063967, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677898068876, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677898, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677898068985, "job": 1, "event": "recovery_finished"}
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55621d66ce00
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: DB pointer 0x55621d7b6000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:11:38 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 2.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 2.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55621d63f8d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 29 09:11:38 compute-0 ceph-mon[75183]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@-1(???) e1 preinit fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@-1(???).mds e1 new map
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-01-29T09:11:36:099108+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 29 09:11:38 compute-0 ceph-mon[75183]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 29 09:11:38 compute-0 podman[75184]: 2026-01-29 09:11:38.097274347 +0000 UTC m=+0.040838989 container create 4b524d481b7ecc72c68e9eeadacb1dd446efb77d2743c973a9bb5182958dd699 (image=quay.io/ceph/ceph:v20, name=heuristic_galois, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 29 09:11:38 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 29 09:11:38 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:11:38 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : last_changed 2026-01-29T09:11:34.210489+0000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : created 2026-01-29T09:11:34.210489+0000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 29 09:11:38 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 29 09:11:38 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : fsmap 
Jan 29 09:11:38 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 29 09:11:38 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 29 09:11:38 compute-0 systemd[1]: Started libpod-conmon-4b524d481b7ecc72c68e9eeadacb1dd446efb77d2743c973a9bb5182958dd699.scope.
Jan 29 09:11:38 compute-0 podman[75184]: 2026-01-29 09:11:38.078352008 +0000 UTC m=+0.021916650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 29 09:11:38 compute-0 ceph-mon[75183]: monmap epoch 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:11:38 compute-0 ceph-mon[75183]: last_changed 2026-01-29T09:11:34.210489+0000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: created 2026-01-29T09:11:34.210489+0000
Jan 29 09:11:38 compute-0 ceph-mon[75183]: min_mon_release 20 (tentacle)
Jan 29 09:11:38 compute-0 ceph-mon[75183]: election_strategy: 1
Jan 29 09:11:38 compute-0 ceph-mon[75183]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 29 09:11:38 compute-0 ceph-mon[75183]: fsmap 
Jan 29 09:11:38 compute-0 ceph-mon[75183]: osdmap e1: 0 total, 0 up, 0 in
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mgrmap e1: no daemons active
Jan 29 09:11:38 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82f9d805318b9ff15ebce96e75c2f0c84685fdef298f242a73e3d1d1acbb6331/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82f9d805318b9ff15ebce96e75c2f0c84685fdef298f242a73e3d1d1acbb6331/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82f9d805318b9ff15ebce96e75c2f0c84685fdef298f242a73e3d1d1acbb6331/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:38 compute-0 podman[75184]: 2026-01-29 09:11:38.20080945 +0000 UTC m=+0.144374122 container init 4b524d481b7ecc72c68e9eeadacb1dd446efb77d2743c973a9bb5182958dd699 (image=quay.io/ceph/ceph:v20, name=heuristic_galois, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:38 compute-0 podman[75184]: 2026-01-29 09:11:38.208198579 +0000 UTC m=+0.151763221 container start 4b524d481b7ecc72c68e9eeadacb1dd446efb77d2743c973a9bb5182958dd699 (image=quay.io/ceph/ceph:v20, name=heuristic_galois, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:11:38 compute-0 podman[75184]: 2026-01-29 09:11:38.213377218 +0000 UTC m=+0.156941880 container attach 4b524d481b7ecc72c68e9eeadacb1dd446efb77d2743c973a9bb5182958dd699 (image=quay.io/ceph/ceph:v20, name=heuristic_galois, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 29 09:11:38 compute-0 systemd[1]: libpod-4b524d481b7ecc72c68e9eeadacb1dd446efb77d2743c973a9bb5182958dd699.scope: Deactivated successfully.
Jan 29 09:11:38 compute-0 podman[75184]: 2026-01-29 09:11:38.41991479 +0000 UTC m=+0.363479432 container died 4b524d481b7ecc72c68e9eeadacb1dd446efb77d2743c973a9bb5182958dd699 (image=quay.io/ceph/ceph:v20, name=heuristic_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 09:11:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-82f9d805318b9ff15ebce96e75c2f0c84685fdef298f242a73e3d1d1acbb6331-merged.mount: Deactivated successfully.
Jan 29 09:11:38 compute-0 podman[75184]: 2026-01-29 09:11:38.456600586 +0000 UTC m=+0.400165228 container remove 4b524d481b7ecc72c68e9eeadacb1dd446efb77d2743c973a9bb5182958dd699 (image=quay.io/ceph/ceph:v20, name=heuristic_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 09:11:38 compute-0 systemd[1]: libpod-conmon-4b524d481b7ecc72c68e9eeadacb1dd446efb77d2743c973a9bb5182958dd699.scope: Deactivated successfully.
Jan 29 09:11:38 compute-0 podman[75275]: 2026-01-29 09:11:38.519472037 +0000 UTC m=+0.044734384 container create 5ab348983d96c7202454279155fd878535e06abd60f6da8fdee90a75f134a028 (image=quay.io/ceph/ceph:v20, name=exciting_joliot, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 29 09:11:38 compute-0 systemd[1]: Started libpod-conmon-5ab348983d96c7202454279155fd878535e06abd60f6da8fdee90a75f134a028.scope.
Jan 29 09:11:38 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:38 compute-0 podman[75275]: 2026-01-29 09:11:38.499595592 +0000 UTC m=+0.024857929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda3da3d85c117fd55351f735b69278e2c6187bdf3520c76f14eb83ea493e9ec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda3da3d85c117fd55351f735b69278e2c6187bdf3520c76f14eb83ea493e9ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda3da3d85c117fd55351f735b69278e2c6187bdf3520c76f14eb83ea493e9ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:38 compute-0 podman[75275]: 2026-01-29 09:11:38.610869404 +0000 UTC m=+0.136131721 container init 5ab348983d96c7202454279155fd878535e06abd60f6da8fdee90a75f134a028 (image=quay.io/ceph/ceph:v20, name=exciting_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:11:38 compute-0 podman[75275]: 2026-01-29 09:11:38.616804073 +0000 UTC m=+0.142066390 container start 5ab348983d96c7202454279155fd878535e06abd60f6da8fdee90a75f134a028 (image=quay.io/ceph/ceph:v20, name=exciting_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:11:38 compute-0 podman[75275]: 2026-01-29 09:11:38.620863752 +0000 UTC m=+0.146126089 container attach 5ab348983d96c7202454279155fd878535e06abd60f6da8fdee90a75f134a028 (image=quay.io/ceph/ceph:v20, name=exciting_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 09:11:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 29 09:11:38 compute-0 systemd[1]: libpod-5ab348983d96c7202454279155fd878535e06abd60f6da8fdee90a75f134a028.scope: Deactivated successfully.
Jan 29 09:11:38 compute-0 podman[75275]: 2026-01-29 09:11:38.838444601 +0000 UTC m=+0.363706908 container died 5ab348983d96c7202454279155fd878535e06abd60f6da8fdee90a75f134a028 (image=quay.io/ceph/ceph:v20, name=exciting_joliot, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 09:11:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-bda3da3d85c117fd55351f735b69278e2c6187bdf3520c76f14eb83ea493e9ec-merged.mount: Deactivated successfully.
Jan 29 09:11:38 compute-0 podman[75275]: 2026-01-29 09:11:38.873892134 +0000 UTC m=+0.399154441 container remove 5ab348983d96c7202454279155fd878535e06abd60f6da8fdee90a75f134a028 (image=quay.io/ceph/ceph:v20, name=exciting_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:38 compute-0 systemd[1]: libpod-conmon-5ab348983d96c7202454279155fd878535e06abd60f6da8fdee90a75f134a028.scope: Deactivated successfully.
Jan 29 09:11:38 compute-0 systemd[1]: Reloading.
Jan 29 09:11:39 compute-0 systemd-sysv-generator[75357]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:11:39 compute-0 systemd-rc-local-generator[75352]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:11:39 compute-0 systemd[1]: Reloading.
Jan 29 09:11:39 compute-0 systemd-sysv-generator[75400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:11:39 compute-0 systemd-rc-local-generator[75397]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:11:39 compute-0 systemd[1]: Starting Ceph mgr.compute-0.ucpkkb for 3fdce3ca-565d-5459-88e8-1ffe58b48437...
Jan 29 09:11:39 compute-0 podman[75454]: 2026-01-29 09:11:39.629209919 +0000 UTC m=+0.038525096 container create 673f2b22a08be0934ca4b5dad3d4e0810fcafeefaf366f800286e6e3cf4bb77d (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b47df1590a7f29e81707b61b16adfd6af986ef8b53964045fdd510119a50dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b47df1590a7f29e81707b61b16adfd6af986ef8b53964045fdd510119a50dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b47df1590a7f29e81707b61b16adfd6af986ef8b53964045fdd510119a50dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b47df1590a7f29e81707b61b16adfd6af986ef8b53964045fdd510119a50dc/merged/var/lib/ceph/mgr/ceph-compute-0.ucpkkb supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:39 compute-0 podman[75454]: 2026-01-29 09:11:39.678359741 +0000 UTC m=+0.087674948 container init 673f2b22a08be0934ca4b5dad3d4e0810fcafeefaf366f800286e6e3cf4bb77d (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:39 compute-0 podman[75454]: 2026-01-29 09:11:39.685221605 +0000 UTC m=+0.094536782 container start 673f2b22a08be0934ca4b5dad3d4e0810fcafeefaf366f800286e6e3cf4bb77d (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 09:11:39 compute-0 bash[75454]: 673f2b22a08be0934ca4b5dad3d4e0810fcafeefaf366f800286e6e3cf4bb77d
Jan 29 09:11:39 compute-0 podman[75454]: 2026-01-29 09:11:39.61284927 +0000 UTC m=+0.022164467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:39 compute-0 systemd[1]: Started Ceph mgr.compute-0.ucpkkb for 3fdce3ca-565d-5459-88e8-1ffe58b48437.
Jan 29 09:11:39 compute-0 ceph-mgr[75473]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 09:11:39 compute-0 ceph-mgr[75473]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 29 09:11:39 compute-0 ceph-mgr[75473]: pidfile_write: ignore empty --pid-file
Jan 29 09:11:39 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'alerts'
Jan 29 09:11:39 compute-0 podman[75474]: 2026-01-29 09:11:39.75868898 +0000 UTC m=+0.037708524 container create 77f1aeec52faaa7774fe4431c4ce4668847b8f939fe223e3b319b91e318d04d8 (image=quay.io/ceph/ceph:v20, name=thirsty_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:11:39 compute-0 systemd[1]: Started libpod-conmon-77f1aeec52faaa7774fe4431c4ce4668847b8f939fe223e3b319b91e318d04d8.scope.
Jan 29 09:11:39 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32dbae938f49d990990cf3a7facb30687a1267bc3bf74119f9ab194e00055fb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32dbae938f49d990990cf3a7facb30687a1267bc3bf74119f9ab194e00055fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32dbae938f49d990990cf3a7facb30687a1267bc3bf74119f9ab194e00055fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:39 compute-0 podman[75474]: 2026-01-29 09:11:39.840912391 +0000 UTC m=+0.119931965 container init 77f1aeec52faaa7774fe4431c4ce4668847b8f939fe223e3b319b91e318d04d8 (image=quay.io/ceph/ceph:v20, name=thirsty_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:11:39 compute-0 podman[75474]: 2026-01-29 09:11:39.743578074 +0000 UTC m=+0.022597648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:39 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'balancer'
Jan 29 09:11:39 compute-0 podman[75474]: 2026-01-29 09:11:39.847745874 +0000 UTC m=+0.126765418 container start 77f1aeec52faaa7774fe4431c4ce4668847b8f939fe223e3b319b91e318d04d8 (image=quay.io/ceph/ceph:v20, name=thirsty_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:11:39 compute-0 podman[75474]: 2026-01-29 09:11:39.852985645 +0000 UTC m=+0.132005189 container attach 77f1aeec52faaa7774fe4431c4ce4668847b8f939fe223e3b319b91e318d04d8 (image=quay.io/ceph/ceph:v20, name=thirsty_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 09:11:39 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'cephadm'
Jan 29 09:11:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 29 09:11:40 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3277683672' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]: 
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]: {
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     "fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     "health": {
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "status": "HEALTH_OK",
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "checks": {},
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "mutes": []
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     },
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     "election_epoch": 5,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     "quorum": [
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         0
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     ],
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     "quorum_names": [
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "compute-0"
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     ],
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     "quorum_age": 1,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     "monmap": {
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "epoch": 1,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "min_mon_release_name": "tentacle",
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "num_mons": 1
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     },
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     "osdmap": {
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "epoch": 1,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "num_osds": 0,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "num_up_osds": 0,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "osd_up_since": 0,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "num_in_osds": 0,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "osd_in_since": 0,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "num_remapped_pgs": 0
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     },
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     "pgmap": {
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "pgs_by_state": [],
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "num_pgs": 0,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "num_pools": 0,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "num_objects": 0,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "data_bytes": 0,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "bytes_used": 0,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "bytes_avail": 0,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "bytes_total": 0
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     },
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     "fsmap": {
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "epoch": 1,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "btime": "2026-01-29T09:11:36:099108+0000",
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "by_rank": [],
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "up:standby": 0
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     },
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     "mgrmap": {
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "available": false,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "num_standbys": 0,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "modules": [
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:             "iostat",
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:             "nfs"
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         ],
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "services": {}
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     },
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     "servicemap": {
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "epoch": 1,
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "modified": "2026-01-29T09:11:36.103347+0000",
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:         "services": {}
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     },
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]:     "progress_events": {}
Jan 29 09:11:40 compute-0 thirsty_khorana[75511]: }
Jan 29 09:11:40 compute-0 systemd[1]: libpod-77f1aeec52faaa7774fe4431c4ce4668847b8f939fe223e3b319b91e318d04d8.scope: Deactivated successfully.
Jan 29 09:11:40 compute-0 podman[75474]: 2026-01-29 09:11:40.078368104 +0000 UTC m=+0.357387648 container died 77f1aeec52faaa7774fe4431c4ce4668847b8f939fe223e3b319b91e318d04d8 (image=quay.io/ceph/ceph:v20, name=thirsty_khorana, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 29 09:11:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c32dbae938f49d990990cf3a7facb30687a1267bc3bf74119f9ab194e00055fb-merged.mount: Deactivated successfully.
Jan 29 09:11:40 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3277683672' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 09:11:40 compute-0 podman[75474]: 2026-01-29 09:11:40.127642159 +0000 UTC m=+0.406661703 container remove 77f1aeec52faaa7774fe4431c4ce4668847b8f939fe223e3b319b91e318d04d8 (image=quay.io/ceph/ceph:v20, name=thirsty_khorana, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:40 compute-0 systemd[1]: libpod-conmon-77f1aeec52faaa7774fe4431c4ce4668847b8f939fe223e3b319b91e318d04d8.scope: Deactivated successfully.
Jan 29 09:11:40 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'crash'
Jan 29 09:11:40 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'dashboard'
Jan 29 09:11:41 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'devicehealth'
Jan 29 09:11:41 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'diskprediction_local'
Jan 29 09:11:41 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 29 09:11:41 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 29 09:11:41 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]:   from numpy import show_config as show_numpy_config
Jan 29 09:11:41 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'influx'
Jan 29 09:11:41 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'insights'
Jan 29 09:11:41 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'iostat'
Jan 29 09:11:41 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'k8sevents'
Jan 29 09:11:42 compute-0 podman[75560]: 2026-01-29 09:11:42.19148624 +0000 UTC m=+0.040724356 container create 915b5e9d17e3d863a391a686d12e78b0d6169d205d398e30ab1ae776babad465 (image=quay.io/ceph/ceph:v20, name=dreamy_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 09:11:42 compute-0 systemd[1]: Started libpod-conmon-915b5e9d17e3d863a391a686d12e78b0d6169d205d398e30ab1ae776babad465.scope.
Jan 29 09:11:42 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'localpool'
Jan 29 09:11:42 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db61f0c7a086746a6889492469da7b7ed17e613c389985cee01739c75256a2b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db61f0c7a086746a6889492469da7b7ed17e613c389985cee01739c75256a2b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db61f0c7a086746a6889492469da7b7ed17e613c389985cee01739c75256a2b2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:42 compute-0 podman[75560]: 2026-01-29 09:11:42.270195796 +0000 UTC m=+0.119433932 container init 915b5e9d17e3d863a391a686d12e78b0d6169d205d398e30ab1ae776babad465 (image=quay.io/ceph/ceph:v20, name=dreamy_payne, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:42 compute-0 podman[75560]: 2026-01-29 09:11:42.174817262 +0000 UTC m=+0.024055398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:42 compute-0 podman[75560]: 2026-01-29 09:11:42.274600324 +0000 UTC m=+0.123838440 container start 915b5e9d17e3d863a391a686d12e78b0d6169d205d398e30ab1ae776babad465 (image=quay.io/ceph/ceph:v20, name=dreamy_payne, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:42 compute-0 podman[75560]: 2026-01-29 09:11:42.278401187 +0000 UTC m=+0.127639323 container attach 915b5e9d17e3d863a391a686d12e78b0d6169d205d398e30ab1ae776babad465 (image=quay.io/ceph/ceph:v20, name=dreamy_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 09:11:42 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'mds_autoscaler'
Jan 29 09:11:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 29 09:11:42 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1212978361' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 09:11:42 compute-0 dreamy_payne[75577]: 
Jan 29 09:11:42 compute-0 dreamy_payne[75577]: {
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     "fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     "health": {
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "status": "HEALTH_OK",
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "checks": {},
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "mutes": []
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     },
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     "election_epoch": 5,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     "quorum": [
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         0
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     ],
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     "quorum_names": [
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "compute-0"
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     ],
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     "quorum_age": 4,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     "monmap": {
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "epoch": 1,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "min_mon_release_name": "tentacle",
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "num_mons": 1
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     },
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     "osdmap": {
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "epoch": 1,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "num_osds": 0,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "num_up_osds": 0,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "osd_up_since": 0,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "num_in_osds": 0,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "osd_in_since": 0,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "num_remapped_pgs": 0
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     },
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     "pgmap": {
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "pgs_by_state": [],
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "num_pgs": 0,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "num_pools": 0,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "num_objects": 0,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "data_bytes": 0,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "bytes_used": 0,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "bytes_avail": 0,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "bytes_total": 0
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     },
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     "fsmap": {
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "epoch": 1,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "btime": "2026-01-29T09:11:36:099108+0000",
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "by_rank": [],
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "up:standby": 0
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     },
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     "mgrmap": {
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "available": false,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "num_standbys": 0,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "modules": [
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:             "iostat",
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:             "nfs"
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         ],
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "services": {}
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     },
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     "servicemap": {
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "epoch": 1,
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "modified": "2026-01-29T09:11:36.103347+0000",
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:         "services": {}
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     },
Jan 29 09:11:42 compute-0 dreamy_payne[75577]:     "progress_events": {}
Jan 29 09:11:42 compute-0 dreamy_payne[75577]: }
Jan 29 09:11:42 compute-0 systemd[1]: libpod-915b5e9d17e3d863a391a686d12e78b0d6169d205d398e30ab1ae776babad465.scope: Deactivated successfully.
Jan 29 09:11:42 compute-0 podman[75560]: 2026-01-29 09:11:42.495072821 +0000 UTC m=+0.344310937 container died 915b5e9d17e3d863a391a686d12e78b0d6169d205d398e30ab1ae776babad465 (image=quay.io/ceph/ceph:v20, name=dreamy_payne, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 09:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-db61f0c7a086746a6889492469da7b7ed17e613c389985cee01739c75256a2b2-merged.mount: Deactivated successfully.
Jan 29 09:11:42 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1212978361' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 09:11:42 compute-0 podman[75560]: 2026-01-29 09:11:42.530654078 +0000 UTC m=+0.379892194 container remove 915b5e9d17e3d863a391a686d12e78b0d6169d205d398e30ab1ae776babad465 (image=quay.io/ceph/ceph:v20, name=dreamy_payne, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:11:42 compute-0 systemd[1]: libpod-conmon-915b5e9d17e3d863a391a686d12e78b0d6169d205d398e30ab1ae776babad465.scope: Deactivated successfully.
Jan 29 09:11:42 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'mirroring'
Jan 29 09:11:42 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'nfs'
Jan 29 09:11:42 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'orchestrator'
Jan 29 09:11:43 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'osd_perf_query'
Jan 29 09:11:43 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'osd_support'
Jan 29 09:11:43 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'pg_autoscaler'
Jan 29 09:11:43 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'progress'
Jan 29 09:11:43 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'prometheus'
Jan 29 09:11:43 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'rbd_support'
Jan 29 09:11:43 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'rgw'
Jan 29 09:11:44 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'rook'
Jan 29 09:11:44 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'selftest'
Jan 29 09:11:44 compute-0 podman[75615]: 2026-01-29 09:11:44.593163423 +0000 UTC m=+0.039508423 container create c35651372303242048190f4c1cd8f9723a955d326e5bb473b722f27f74182d31 (image=quay.io/ceph/ceph:v20, name=practical_ishizaka, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:11:44 compute-0 systemd[1]: Started libpod-conmon-c35651372303242048190f4c1cd8f9723a955d326e5bb473b722f27f74182d31.scope.
Jan 29 09:11:44 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'smb'
Jan 29 09:11:44 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557541ad622537d6cff7812cee150321cb7a55be4bc08c88896ab0034258a14a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557541ad622537d6cff7812cee150321cb7a55be4bc08c88896ab0034258a14a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557541ad622537d6cff7812cee150321cb7a55be4bc08c88896ab0034258a14a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:44 compute-0 podman[75615]: 2026-01-29 09:11:44.665293712 +0000 UTC m=+0.111638742 container init c35651372303242048190f4c1cd8f9723a955d326e5bb473b722f27f74182d31 (image=quay.io/ceph/ceph:v20, name=practical_ishizaka, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:44 compute-0 podman[75615]: 2026-01-29 09:11:44.669698411 +0000 UTC m=+0.116043421 container start c35651372303242048190f4c1cd8f9723a955d326e5bb473b722f27f74182d31 (image=quay.io/ceph/ceph:v20, name=practical_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 29 09:11:44 compute-0 podman[75615]: 2026-01-29 09:11:44.674736736 +0000 UTC m=+0.121081756 container attach c35651372303242048190f4c1cd8f9723a955d326e5bb473b722f27f74182d31 (image=quay.io/ceph/ceph:v20, name=practical_ishizaka, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 09:11:44 compute-0 podman[75615]: 2026-01-29 09:11:44.575071767 +0000 UTC m=+0.021416797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 29 09:11:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3964768178' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]: 
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]: {
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     "fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     "health": {
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "status": "HEALTH_OK",
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "checks": {},
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "mutes": []
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     },
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     "election_epoch": 5,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     "quorum": [
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         0
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     ],
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     "quorum_names": [
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "compute-0"
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     ],
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     "quorum_age": 6,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     "monmap": {
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "epoch": 1,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "min_mon_release_name": "tentacle",
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "num_mons": 1
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     },
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     "osdmap": {
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "epoch": 1,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "num_osds": 0,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "num_up_osds": 0,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "osd_up_since": 0,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "num_in_osds": 0,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "osd_in_since": 0,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "num_remapped_pgs": 0
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     },
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     "pgmap": {
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "pgs_by_state": [],
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "num_pgs": 0,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "num_pools": 0,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "num_objects": 0,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "data_bytes": 0,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "bytes_used": 0,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "bytes_avail": 0,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "bytes_total": 0
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     },
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     "fsmap": {
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "epoch": 1,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "btime": "2026-01-29T09:11:36:099108+0000",
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "by_rank": [],
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "up:standby": 0
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     },
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     "mgrmap": {
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "available": false,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "num_standbys": 0,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "modules": [
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:             "iostat",
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:             "nfs"
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         ],
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "services": {}
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     },
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     "servicemap": {
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "epoch": 1,
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "modified": "2026-01-29T09:11:36.103347+0000",
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:         "services": {}
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     },
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]:     "progress_events": {}
Jan 29 09:11:44 compute-0 practical_ishizaka[75632]: }
Jan 29 09:11:44 compute-0 systemd[1]: libpod-c35651372303242048190f4c1cd8f9723a955d326e5bb473b722f27f74182d31.scope: Deactivated successfully.
Jan 29 09:11:44 compute-0 podman[75615]: 2026-01-29 09:11:44.888744729 +0000 UTC m=+0.335089729 container died c35651372303242048190f4c1cd8f9723a955d326e5bb473b722f27f74182d31 (image=quay.io/ceph/ceph:v20, name=practical_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:44 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'snap_schedule'
Jan 29 09:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-557541ad622537d6cff7812cee150321cb7a55be4bc08c88896ab0034258a14a-merged.mount: Deactivated successfully.
Jan 29 09:11:44 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3964768178' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 09:11:44 compute-0 podman[75615]: 2026-01-29 09:11:44.93451392 +0000 UTC m=+0.380858920 container remove c35651372303242048190f4c1cd8f9723a955d326e5bb473b722f27f74182d31 (image=quay.io/ceph/ceph:v20, name=practical_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:11:44 compute-0 systemd[1]: libpod-conmon-c35651372303242048190f4c1cd8f9723a955d326e5bb473b722f27f74182d31.scope: Deactivated successfully.
Jan 29 09:11:44 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'stats'
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'status'
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'telegraf'
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'telemetry'
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'test_orchestrator'
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'volumes'
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: ms_deliver_dispatch: unhandled message 0x556852a03860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 29 09:11:45 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ucpkkb
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr handle_mgr_map Activating!
Jan 29 09:11:45 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.ucpkkb(active, starting, since 0.0129448s)
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr handle_mgr_map I am now activating
Jan 29 09:11:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 29 09:11:45 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mds metadata"} : dispatch
Jan 29 09:11:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).mds e1 all = 1
Jan 29 09:11:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 29 09:11:45 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata"} : dispatch
Jan 29 09:11:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 29 09:11:45 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mon metadata"} : dispatch
Jan 29 09:11:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 29 09:11:45 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 29 09:11:45 compute-0 ceph-mon[75183]: Activating manager daemon compute-0.ucpkkb
Jan 29 09:11:45 compute-0 ceph-mon[75183]: mgrmap e2: compute-0.ucpkkb(active, starting, since 0.0129448s)
Jan 29 09:11:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ucpkkb", "id": "compute-0.ucpkkb"} v 0)
Jan 29 09:11:45 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mgr metadata", "who": "compute-0.ucpkkb", "id": "compute-0.ucpkkb"} : dispatch
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: balancer
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: crash
Jan 29 09:11:45 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : Manager daemon compute-0.ucpkkb is now available
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [balancer INFO root] Starting
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: devicehealth
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [devicehealth INFO root] Starting
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: iostat
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:11:45
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [balancer INFO root] No pools available
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: nfs
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: orchestrator
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: pg_autoscaler
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: progress
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [progress INFO root] Loading...
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [progress INFO root] No stored events to load
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [progress INFO root] Loaded [] historic events
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [progress INFO root] Loaded OSDMap, ready.
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [rbd_support INFO root] recovery thread starting
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [rbd_support INFO root] starting setup
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: rbd_support
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: status
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ucpkkb/mirror_snapshot_schedule"} v 0)
Jan 29 09:11:45 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ucpkkb/mirror_snapshot_schedule"} : dispatch
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: telemetry
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [rbd_support INFO root] PerfHandler: starting
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TaskHandler: starting
Jan 29 09:11:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ucpkkb/trash_purge_schedule"} v 0)
Jan 29 09:11:45 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ucpkkb/trash_purge_schedule"} : dispatch
Jan 29 09:11:45 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: [rbd_support INFO root] setup complete
Jan 29 09:11:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 29 09:11:45 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: volumes
Jan 29 09:11:45 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 29 09:11:45 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:46 compute-0 ceph-mon[75183]: from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mds metadata"} : dispatch
Jan 29 09:11:46 compute-0 ceph-mon[75183]: from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata"} : dispatch
Jan 29 09:11:46 compute-0 ceph-mon[75183]: from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mon metadata"} : dispatch
Jan 29 09:11:46 compute-0 ceph-mon[75183]: from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 29 09:11:46 compute-0 ceph-mon[75183]: from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mgr metadata", "who": "compute-0.ucpkkb", "id": "compute-0.ucpkkb"} : dispatch
Jan 29 09:11:46 compute-0 ceph-mon[75183]: Manager daemon compute-0.ucpkkb is now available
Jan 29 09:11:46 compute-0 ceph-mon[75183]: from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ucpkkb/mirror_snapshot_schedule"} : dispatch
Jan 29 09:11:46 compute-0 ceph-mon[75183]: from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ucpkkb/trash_purge_schedule"} : dispatch
Jan 29 09:11:46 compute-0 ceph-mon[75183]: from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:46 compute-0 ceph-mon[75183]: from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:46 compute-0 ceph-mon[75183]: from='mgr.14102 192.168.122.100:0/1096537851' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:46 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.ucpkkb(active, since 1.0416s)
Jan 29 09:11:47 compute-0 podman[75748]: 2026-01-29 09:11:47.015692628 +0000 UTC m=+0.061109354 container create a9dd59b7fd042817b8bd23fd8dd10ce862a911b6c4421e4264fcaf15f850fd66 (image=quay.io/ceph/ceph:v20, name=priceless_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:11:47 compute-0 systemd[1]: Started libpod-conmon-a9dd59b7fd042817b8bd23fd8dd10ce862a911b6c4421e4264fcaf15f850fd66.scope.
Jan 29 09:11:47 compute-0 podman[75748]: 2026-01-29 09:11:46.976499044 +0000 UTC m=+0.021915830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:47 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20992d09f11c127ac1aa07d3bdeb4bf14b4e2b28e7affff766dfbdd8f47098f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20992d09f11c127ac1aa07d3bdeb4bf14b4e2b28e7affff766dfbdd8f47098f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20992d09f11c127ac1aa07d3bdeb4bf14b4e2b28e7affff766dfbdd8f47098f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:47 compute-0 podman[75748]: 2026-01-29 09:11:47.097447946 +0000 UTC m=+0.142864682 container init a9dd59b7fd042817b8bd23fd8dd10ce862a911b6c4421e4264fcaf15f850fd66 (image=quay.io/ceph/ceph:v20, name=priceless_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 09:11:47 compute-0 podman[75748]: 2026-01-29 09:11:47.103897019 +0000 UTC m=+0.149313735 container start a9dd59b7fd042817b8bd23fd8dd10ce862a911b6c4421e4264fcaf15f850fd66 (image=quay.io/ceph/ceph:v20, name=priceless_fermat, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:47 compute-0 podman[75748]: 2026-01-29 09:11:47.108108952 +0000 UTC m=+0.153525688 container attach a9dd59b7fd042817b8bd23fd8dd10ce862a911b6c4421e4264fcaf15f850fd66 (image=quay.io/ceph/ceph:v20, name=priceless_fermat, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 29 09:11:47 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3936971962' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 09:11:47 compute-0 priceless_fermat[75764]: 
Jan 29 09:11:47 compute-0 priceless_fermat[75764]: {
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     "fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     "health": {
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "status": "HEALTH_OK",
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "checks": {},
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "mutes": []
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     },
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     "election_epoch": 5,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     "quorum": [
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         0
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     ],
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     "quorum_names": [
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "compute-0"
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     ],
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     "quorum_age": 9,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     "monmap": {
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "epoch": 1,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "min_mon_release_name": "tentacle",
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "num_mons": 1
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     },
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     "osdmap": {
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "epoch": 1,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "num_osds": 0,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "num_up_osds": 0,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "osd_up_since": 0,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "num_in_osds": 0,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "osd_in_since": 0,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "num_remapped_pgs": 0
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     },
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     "pgmap": {
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "pgs_by_state": [],
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "num_pgs": 0,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "num_pools": 0,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "num_objects": 0,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "data_bytes": 0,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "bytes_used": 0,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "bytes_avail": 0,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "bytes_total": 0
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     },
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     "fsmap": {
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "epoch": 1,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "btime": "2026-01-29T09:11:36:099108+0000",
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "by_rank": [],
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "up:standby": 0
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     },
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     "mgrmap": {
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "available": true,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "num_standbys": 0,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "modules": [
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:             "iostat",
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:             "nfs"
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         ],
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "services": {}
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     },
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     "servicemap": {
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "epoch": 1,
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "modified": "2026-01-29T09:11:36.103347+0000",
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:         "services": {}
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     },
Jan 29 09:11:47 compute-0 priceless_fermat[75764]:     "progress_events": {}
Jan 29 09:11:47 compute-0 priceless_fermat[75764]: }
Jan 29 09:11:47 compute-0 systemd[1]: libpod-a9dd59b7fd042817b8bd23fd8dd10ce862a911b6c4421e4264fcaf15f850fd66.scope: Deactivated successfully.
Jan 29 09:11:47 compute-0 podman[75748]: 2026-01-29 09:11:47.689867281 +0000 UTC m=+0.735283997 container died a9dd59b7fd042817b8bd23fd8dd10ce862a911b6c4421e4264fcaf15f850fd66 (image=quay.io/ceph/ceph:v20, name=priceless_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 29 09:11:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a20992d09f11c127ac1aa07d3bdeb4bf14b4e2b28e7affff766dfbdd8f47098f-merged.mount: Deactivated successfully.
Jan 29 09:11:47 compute-0 podman[75748]: 2026-01-29 09:11:47.717553296 +0000 UTC m=+0.762970012 container remove a9dd59b7fd042817b8bd23fd8dd10ce862a911b6c4421e4264fcaf15f850fd66 (image=quay.io/ceph/ceph:v20, name=priceless_fermat, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:11:47 compute-0 systemd[1]: libpod-conmon-a9dd59b7fd042817b8bd23fd8dd10ce862a911b6c4421e4264fcaf15f850fd66.scope: Deactivated successfully.
Jan 29 09:11:47 compute-0 podman[75800]: 2026-01-29 09:11:47.774101135 +0000 UTC m=+0.040102118 container create 68db780e53f6afa894695ede04a3e0b99e00ad20eba036285d44036e0e0c2156 (image=quay.io/ceph/ceph:v20, name=reverent_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:11:47 compute-0 systemd[1]: Started libpod-conmon-68db780e53f6afa894695ede04a3e0b99e00ad20eba036285d44036e0e0c2156.scope.
Jan 29 09:11:47 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f683e0d3ae2309f871e48fb994fc3fe1bf59646c40eaead236be0867d238249/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f683e0d3ae2309f871e48fb994fc3fe1bf59646c40eaead236be0867d238249/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f683e0d3ae2309f871e48fb994fc3fe1bf59646c40eaead236be0867d238249/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f683e0d3ae2309f871e48fb994fc3fe1bf59646c40eaead236be0867d238249/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:47 compute-0 podman[75800]: 2026-01-29 09:11:47.756208114 +0000 UTC m=+0.022209117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:47 compute-0 podman[75800]: 2026-01-29 09:11:47.858830083 +0000 UTC m=+0.124831086 container init 68db780e53f6afa894695ede04a3e0b99e00ad20eba036285d44036e0e0c2156 (image=quay.io/ceph/ceph:v20, name=reverent_liskov, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:11:47 compute-0 podman[75800]: 2026-01-29 09:11:47.862971324 +0000 UTC m=+0.128972307 container start 68db780e53f6afa894695ede04a3e0b99e00ad20eba036285d44036e0e0c2156 (image=quay.io/ceph/ceph:v20, name=reverent_liskov, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 09:11:47 compute-0 podman[75800]: 2026-01-29 09:11:47.866588721 +0000 UTC m=+0.132589704 container attach 68db780e53f6afa894695ede04a3e0b99e00ad20eba036285d44036e0e0c2156 (image=quay.io/ceph/ceph:v20, name=reverent_liskov, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:11:47 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 09:11:47 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:11:47 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.ucpkkb(active, since 2s)
Jan 29 09:11:47 compute-0 ceph-mon[75183]: mgrmap e3: compute-0.ucpkkb(active, since 1.0416s)
Jan 29 09:11:47 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3936971962' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 09:11:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 29 09:11:48 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2930862372' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 29 09:11:48 compute-0 reverent_liskov[75817]: 
Jan 29 09:11:48 compute-0 reverent_liskov[75817]: [global]
Jan 29 09:11:48 compute-0 reverent_liskov[75817]:         fsid = 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:11:48 compute-0 reverent_liskov[75817]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 29 09:11:48 compute-0 reverent_liskov[75817]:         osd_crush_chooseleaf_type = 0
Jan 29 09:11:48 compute-0 systemd[1]: libpod-68db780e53f6afa894695ede04a3e0b99e00ad20eba036285d44036e0e0c2156.scope: Deactivated successfully.
Jan 29 09:11:48 compute-0 podman[75843]: 2026-01-29 09:11:48.30957924 +0000 UTC m=+0.022948458 container died 68db780e53f6afa894695ede04a3e0b99e00ad20eba036285d44036e0e0c2156 (image=quay.io/ceph/ceph:v20, name=reverent_liskov, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f683e0d3ae2309f871e48fb994fc3fe1bf59646c40eaead236be0867d238249-merged.mount: Deactivated successfully.
Jan 29 09:11:48 compute-0 podman[75843]: 2026-01-29 09:11:48.348233679 +0000 UTC m=+0.061602877 container remove 68db780e53f6afa894695ede04a3e0b99e00ad20eba036285d44036e0e0c2156 (image=quay.io/ceph/ceph:v20, name=reverent_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 09:11:48 compute-0 systemd[1]: libpod-conmon-68db780e53f6afa894695ede04a3e0b99e00ad20eba036285d44036e0e0c2156.scope: Deactivated successfully.
Jan 29 09:11:48 compute-0 podman[75858]: 2026-01-29 09:11:48.410085912 +0000 UTC m=+0.041784694 container create 1f9c2f341dcb2bb51deb33bf758a544d578982a684598c184453bfa4da809285 (image=quay.io/ceph/ceph:v20, name=admiring_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 09:11:48 compute-0 systemd[1]: Started libpod-conmon-1f9c2f341dcb2bb51deb33bf758a544d578982a684598c184453bfa4da809285.scope.
Jan 29 09:11:48 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ea335a0c0408a77dc2a36fc1afc83b6907ca72a848fece2f7f6ed16ef40139f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ea335a0c0408a77dc2a36fc1afc83b6907ca72a848fece2f7f6ed16ef40139f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ea335a0c0408a77dc2a36fc1afc83b6907ca72a848fece2f7f6ed16ef40139f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:48 compute-0 podman[75858]: 2026-01-29 09:11:48.474511224 +0000 UTC m=+0.106210056 container init 1f9c2f341dcb2bb51deb33bf758a544d578982a684598c184453bfa4da809285 (image=quay.io/ceph/ceph:v20, name=admiring_feistel, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 09:11:48 compute-0 podman[75858]: 2026-01-29 09:11:48.479552299 +0000 UTC m=+0.111251071 container start 1f9c2f341dcb2bb51deb33bf758a544d578982a684598c184453bfa4da809285 (image=quay.io/ceph/ceph:v20, name=admiring_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:11:48 compute-0 podman[75858]: 2026-01-29 09:11:48.482818947 +0000 UTC m=+0.114517739 container attach 1f9c2f341dcb2bb51deb33bf758a544d578982a684598c184453bfa4da809285 (image=quay.io/ceph/ceph:v20, name=admiring_feistel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:11:48 compute-0 podman[75858]: 2026-01-29 09:11:48.391019299 +0000 UTC m=+0.022718111 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 29 09:11:48 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2127147710' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 29 09:11:48 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2127147710' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn  1: '-n'
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn  2: 'mgr.compute-0.ucpkkb'
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn  3: '-f'
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn  4: '--setuser'
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn  5: 'ceph'
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn  6: '--setgroup'
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn  7: 'ceph'
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn  8: '--default-log-to-file=false'
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn  9: '--default-log-to-journald=true'
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 29 09:11:48 compute-0 ceph-mgr[75473]: mgr respawn  exe_path /proc/self/exe
Jan 29 09:11:48 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.ucpkkb(active, since 3s)
Jan 29 09:11:49 compute-0 ceph-mon[75183]: mgrmap e4: compute-0.ucpkkb(active, since 2s)
Jan 29 09:11:49 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2930862372' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 29 09:11:49 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2127147710' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 29 09:11:49 compute-0 systemd[1]: libpod-1f9c2f341dcb2bb51deb33bf758a544d578982a684598c184453bfa4da809285.scope: Deactivated successfully.
Jan 29 09:11:49 compute-0 podman[75858]: 2026-01-29 09:11:49.011851609 +0000 UTC m=+0.643550421 container died 1f9c2f341dcb2bb51deb33bf758a544d578982a684598c184453bfa4da809285 (image=quay.io/ceph/ceph:v20, name=admiring_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:11:49 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]: ignoring --setuser ceph since I am not root
Jan 29 09:11:49 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]: ignoring --setgroup ceph since I am not root
Jan 29 09:11:49 compute-0 ceph-mgr[75473]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 29 09:11:49 compute-0 ceph-mgr[75473]: pidfile_write: ignore empty --pid-file
Jan 29 09:11:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ea335a0c0408a77dc2a36fc1afc83b6907ca72a848fece2f7f6ed16ef40139f-merged.mount: Deactivated successfully.
Jan 29 09:11:49 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'alerts'
Jan 29 09:11:49 compute-0 podman[75858]: 2026-01-29 09:11:49.104416708 +0000 UTC m=+0.736115480 container remove 1f9c2f341dcb2bb51deb33bf758a544d578982a684598c184453bfa4da809285 (image=quay.io/ceph/ceph:v20, name=admiring_feistel, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 09:11:49 compute-0 systemd[1]: libpod-conmon-1f9c2f341dcb2bb51deb33bf758a544d578982a684598c184453bfa4da809285.scope: Deactivated successfully.
Jan 29 09:11:49 compute-0 podman[75934]: 2026-01-29 09:11:49.163979119 +0000 UTC m=+0.044514558 container create 19889b0598bd9fce4082375eb4ffde83859e53b4ae8bb782899f6a5e677e2145 (image=quay.io/ceph/ceph:v20, name=brave_sutherland, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:49 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'balancer'
Jan 29 09:11:49 compute-0 systemd[1]: Started libpod-conmon-19889b0598bd9fce4082375eb4ffde83859e53b4ae8bb782899f6a5e677e2145.scope.
Jan 29 09:11:49 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c296fd9892433e74cac767b1015528580d734e285ebe7e20a225ef5434abe0b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c296fd9892433e74cac767b1015528580d734e285ebe7e20a225ef5434abe0b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c296fd9892433e74cac767b1015528580d734e285ebe7e20a225ef5434abe0b4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:49 compute-0 podman[75934]: 2026-01-29 09:11:49.227184738 +0000 UTC m=+0.107720197 container init 19889b0598bd9fce4082375eb4ffde83859e53b4ae8bb782899f6a5e677e2145 (image=quay.io/ceph/ceph:v20, name=brave_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 09:11:49 compute-0 podman[75934]: 2026-01-29 09:11:49.231531895 +0000 UTC m=+0.112067324 container start 19889b0598bd9fce4082375eb4ffde83859e53b4ae8bb782899f6a5e677e2145 (image=quay.io/ceph/ceph:v20, name=brave_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:49 compute-0 podman[75934]: 2026-01-29 09:11:49.235154052 +0000 UTC m=+0.115689521 container attach 19889b0598bd9fce4082375eb4ffde83859e53b4ae8bb782899f6a5e677e2145 (image=quay.io/ceph/ceph:v20, name=brave_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:11:49 compute-0 podman[75934]: 2026-01-29 09:11:49.14505095 +0000 UTC m=+0.025586449 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:49 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'cephadm'
Jan 29 09:11:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 29 09:11:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4133913014' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 29 09:11:49 compute-0 brave_sutherland[75950]: {
Jan 29 09:11:49 compute-0 brave_sutherland[75950]:     "epoch": 5,
Jan 29 09:11:49 compute-0 brave_sutherland[75950]:     "available": true,
Jan 29 09:11:49 compute-0 brave_sutherland[75950]:     "active_name": "compute-0.ucpkkb",
Jan 29 09:11:49 compute-0 brave_sutherland[75950]:     "num_standby": 0
Jan 29 09:11:49 compute-0 brave_sutherland[75950]: }
Jan 29 09:11:49 compute-0 systemd[1]: libpod-19889b0598bd9fce4082375eb4ffde83859e53b4ae8bb782899f6a5e677e2145.scope: Deactivated successfully.
Jan 29 09:11:49 compute-0 podman[75934]: 2026-01-29 09:11:49.740606101 +0000 UTC m=+0.621141540 container died 19889b0598bd9fce4082375eb4ffde83859e53b4ae8bb782899f6a5e677e2145 (image=quay.io/ceph/ceph:v20, name=brave_sutherland, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:11:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c296fd9892433e74cac767b1015528580d734e285ebe7e20a225ef5434abe0b4-merged.mount: Deactivated successfully.
Jan 29 09:11:49 compute-0 podman[75934]: 2026-01-29 09:11:49.774655526 +0000 UTC m=+0.655190965 container remove 19889b0598bd9fce4082375eb4ffde83859e53b4ae8bb782899f6a5e677e2145 (image=quay.io/ceph/ceph:v20, name=brave_sutherland, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 09:11:49 compute-0 systemd[1]: libpod-conmon-19889b0598bd9fce4082375eb4ffde83859e53b4ae8bb782899f6a5e677e2145.scope: Deactivated successfully.
Jan 29 09:11:49 compute-0 podman[76001]: 2026-01-29 09:11:49.831919676 +0000 UTC m=+0.039944105 container create 98eaa169e6a333b11861dcdacd13a766ce9e2a077d25b7e94cb39a85e6cf365c (image=quay.io/ceph/ceph:v20, name=inspiring_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:11:49 compute-0 systemd[1]: Started libpod-conmon-98eaa169e6a333b11861dcdacd13a766ce9e2a077d25b7e94cb39a85e6cf365c.scope.
Jan 29 09:11:49 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a0d2bf90e3e0937e4d3ab5fbb70ba4c9b3338c581712e633f2527808f8a144/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a0d2bf90e3e0937e4d3ab5fbb70ba4c9b3338c581712e633f2527808f8a144/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a0d2bf90e3e0937e4d3ab5fbb70ba4c9b3338c581712e633f2527808f8a144/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:49 compute-0 podman[76001]: 2026-01-29 09:11:49.811532598 +0000 UTC m=+0.019557057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:49 compute-0 podman[76001]: 2026-01-29 09:11:49.914749302 +0000 UTC m=+0.122773751 container init 98eaa169e6a333b11861dcdacd13a766ce9e2a077d25b7e94cb39a85e6cf365c (image=quay.io/ceph/ceph:v20, name=inspiring_jemison, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 09:11:49 compute-0 podman[76001]: 2026-01-29 09:11:49.923028365 +0000 UTC m=+0.131052794 container start 98eaa169e6a333b11861dcdacd13a766ce9e2a077d25b7e94cb39a85e6cf365c (image=quay.io/ceph/ceph:v20, name=inspiring_jemison, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:11:49 compute-0 podman[76001]: 2026-01-29 09:11:49.927145166 +0000 UTC m=+0.135169615 container attach 98eaa169e6a333b11861dcdacd13a766ce9e2a077d25b7e94cb39a85e6cf365c (image=quay.io/ceph/ceph:v20, name=inspiring_jemison, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:11:50 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2127147710' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 29 09:11:50 compute-0 ceph-mon[75183]: mgrmap e5: compute-0.ucpkkb(active, since 3s)
Jan 29 09:11:50 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4133913014' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 29 09:11:50 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'crash'
Jan 29 09:11:50 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'dashboard'
Jan 29 09:11:50 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'devicehealth'
Jan 29 09:11:51 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'diskprediction_local'
Jan 29 09:11:51 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 29 09:11:51 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 29 09:11:51 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]:   from numpy import show_config as show_numpy_config
Jan 29 09:11:51 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'influx'
Jan 29 09:11:51 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'insights'
Jan 29 09:11:51 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'iostat'
Jan 29 09:11:51 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'k8sevents'
Jan 29 09:11:51 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'localpool'
Jan 29 09:11:51 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'mds_autoscaler'
Jan 29 09:11:52 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'mirroring'
Jan 29 09:11:52 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'nfs'
Jan 29 09:11:52 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'orchestrator'
Jan 29 09:11:52 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'osd_perf_query'
Jan 29 09:11:53 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'osd_support'
Jan 29 09:11:53 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'pg_autoscaler'
Jan 29 09:11:53 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'progress'
Jan 29 09:11:53 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'prometheus'
Jan 29 09:11:53 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'rbd_support'
Jan 29 09:11:53 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'rgw'
Jan 29 09:11:53 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'rook'
Jan 29 09:11:54 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'selftest'
Jan 29 09:11:54 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'smb'
Jan 29 09:11:54 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'snap_schedule'
Jan 29 09:11:54 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'stats'
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'status'
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'telegraf'
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'telemetry'
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'test_orchestrator'
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'volumes'
Jan 29 09:11:55 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ucpkkb restarted
Jan 29 09:11:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 29 09:11:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 09:11:55 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ucpkkb
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: ms_deliver_dispatch: unhandled message 0x558c5db40000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 29 09:11:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 29 09:11:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 29 09:11:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: mgr handle_mgr_map Activating!
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: mgr handle_mgr_map I am now activating
Jan 29 09:11:55 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 29 09:11:55 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.ucpkkb(active, starting, since 0.0683736s)
Jan 29 09:11:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 29 09:11:55 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 29 09:11:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ucpkkb", "id": "compute-0.ucpkkb"} v 0)
Jan 29 09:11:55 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mgr metadata", "who": "compute-0.ucpkkb", "id": "compute-0.ucpkkb"} : dispatch
Jan 29 09:11:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 29 09:11:55 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mds metadata"} : dispatch
Jan 29 09:11:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).mds e1 all = 1
Jan 29 09:11:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 29 09:11:55 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata"} : dispatch
Jan 29 09:11:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 29 09:11:55 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mon metadata"} : dispatch
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: balancer
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Starting
Jan 29 09:11:55 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : Manager daemon compute-0.ucpkkb is now available
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:11:55
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:11:55 compute-0 ceph-mgr[75473]: [balancer INFO root] No pools available
Jan 29 09:11:55 compute-0 ceph-mon[75183]: Active manager daemon compute-0.ucpkkb restarted
Jan 29 09:11:55 compute-0 ceph-mon[75183]: Activating manager daemon compute-0.ucpkkb
Jan 29 09:11:55 compute-0 ceph-mon[75183]: osdmap e2: 0 total, 0 up, 0 in
Jan 29 09:11:55 compute-0 ceph-mon[75183]: mgrmap e6: compute-0.ucpkkb(active, starting, since 0.0683736s)
Jan 29 09:11:55 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 29 09:11:55 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mgr metadata", "who": "compute-0.ucpkkb", "id": "compute-0.ucpkkb"} : dispatch
Jan 29 09:11:55 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mds metadata"} : dispatch
Jan 29 09:11:55 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata"} : dispatch
Jan 29 09:11:55 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mon metadata"} : dispatch
Jan 29 09:11:55 compute-0 ceph-mon[75183]: Manager daemon compute-0.ucpkkb is now available
Jan 29 09:11:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 29 09:11:56 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 29 09:11:56 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 29 09:11:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 29 09:11:56 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 29 09:11:56 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: cephadm
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: crash
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: devicehealth
Jan 29 09:11:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 29 09:11:56 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: iostat
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [devicehealth INFO root] Starting
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: nfs
Jan 29 09:11:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 29 09:11:56 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: orchestrator
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: pg_autoscaler
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: progress
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [progress INFO root] Loading...
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [progress INFO root] No stored events to load
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [progress INFO root] Loaded [] historic events
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [progress INFO root] Loaded OSDMap, ready.
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] recovery thread starting
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] starting setup
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: rbd_support
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: status
Jan 29 09:11:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ucpkkb/mirror_snapshot_schedule"} v 0)
Jan 29 09:11:56 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ucpkkb/mirror_snapshot_schedule"} : dispatch
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: telemetry
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] PerfHandler: starting
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TaskHandler: starting
Jan 29 09:11:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ucpkkb/trash_purge_schedule"} v 0)
Jan 29 09:11:56 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ucpkkb/trash_purge_schedule"} : dispatch
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] setup complete
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: volumes
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 29 09:11:56 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.ucpkkb(active, since 1.08023s)
Jan 29 09:11:56 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 29 09:11:56 compute-0 inspiring_jemison[76018]: {
Jan 29 09:11:56 compute-0 inspiring_jemison[76018]:     "mgrmap_epoch": 7,
Jan 29 09:11:56 compute-0 inspiring_jemison[76018]:     "initialized": true
Jan 29 09:11:56 compute-0 inspiring_jemison[76018]: }
Jan 29 09:11:56 compute-0 systemd[1]: libpod-98eaa169e6a333b11861dcdacd13a766ce9e2a077d25b7e94cb39a85e6cf365c.scope: Deactivated successfully.
Jan 29 09:11:56 compute-0 podman[76001]: 2026-01-29 09:11:56.977523569 +0000 UTC m=+7.185548008 container died 98eaa169e6a333b11861dcdacd13a766ce9e2a077d25b7e94cb39a85e6cf365c (image=quay.io/ceph/ceph:v20, name=inspiring_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 09:11:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-59a0d2bf90e3e0937e4d3ab5fbb70ba4c9b3338c581712e633f2527808f8a144-merged.mount: Deactivated successfully.
Jan 29 09:11:57 compute-0 podman[76001]: 2026-01-29 09:11:57.056181353 +0000 UTC m=+7.264205772 container remove 98eaa169e6a333b11861dcdacd13a766ce9e2a077d25b7e94cb39a85e6cf365c (image=quay.io/ceph/ceph:v20, name=inspiring_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 09:11:57 compute-0 systemd[1]: libpod-conmon-98eaa169e6a333b11861dcdacd13a766ce9e2a077d25b7e94cb39a85e6cf365c.scope: Deactivated successfully.
Jan 29 09:11:57 compute-0 podman[76169]: 2026-01-29 09:11:57.140343946 +0000 UTC m=+0.065282516 container create 59b10e8c517ff6448afe715237ef564daa0b813689b6e5f1eef618372de0c1dd (image=quay.io/ceph/ceph:v20, name=angry_napier, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 09:11:57 compute-0 systemd[1]: Started libpod-conmon-59b10e8c517ff6448afe715237ef564daa0b813689b6e5f1eef618372de0c1dd.scope.
Jan 29 09:11:57 compute-0 podman[76169]: 2026-01-29 09:11:57.098676725 +0000 UTC m=+0.023615315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:57 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91fb53a5e29bed6106d892ae8700b030a95bc7d820cfc9452be9857303197550/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91fb53a5e29bed6106d892ae8700b030a95bc7d820cfc9452be9857303197550/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91fb53a5e29bed6106d892ae8700b030a95bc7d820cfc9452be9857303197550/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:57 compute-0 podman[76169]: 2026-01-29 09:11:57.27181965 +0000 UTC m=+0.196758250 container init 59b10e8c517ff6448afe715237ef564daa0b813689b6e5f1eef618372de0c1dd (image=quay.io/ceph/ceph:v20, name=angry_napier, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 09:11:57 compute-0 podman[76169]: 2026-01-29 09:11:57.279942218 +0000 UTC m=+0.204880788 container start 59b10e8c517ff6448afe715237ef564daa0b813689b6e5f1eef618372de0c1dd (image=quay.io/ceph/ceph:v20, name=angry_napier, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:11:57 compute-0 podman[76169]: 2026-01-29 09:11:57.286006741 +0000 UTC m=+0.210945331 container attach 59b10e8c517ff6448afe715237ef564daa0b813689b6e5f1eef618372de0c1dd (image=quay.io/ceph/ceph:v20, name=angry_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:11:57 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:57 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:57 compute-0 ceph-mon[75183]: Found migration_current of "None". Setting to last migration.
Jan 29 09:11:57 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:57 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:57 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 09:11:57 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 09:11:57 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ucpkkb/mirror_snapshot_schedule"} : dispatch
Jan 29 09:11:57 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ucpkkb/trash_purge_schedule"} : dispatch
Jan 29 09:11:57 compute-0 ceph-mon[75183]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 29 09:11:57 compute-0 ceph-mon[75183]: mgrmap e7: compute-0.ucpkkb(active, since 1.08023s)
Jan 29 09:11:57 compute-0 ceph-mon[75183]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 29 09:11:57 compute-0 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Jan/2026:09:11:57] ENGINE Bus STARTING
Jan 29 09:11:57 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Jan/2026:09:11:57] ENGINE Bus STARTING
Jan 29 09:11:57 compute-0 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Jan/2026:09:11:57] ENGINE Serving on https://192.168.122.100:7150
Jan 29 09:11:57 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Jan/2026:09:11:57] ENGINE Serving on https://192.168.122.100:7150
Jan 29 09:11:57 compute-0 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Jan/2026:09:11:57] ENGINE Client ('192.168.122.100', 48324) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 29 09:11:57 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Jan/2026:09:11:57] ENGINE Client ('192.168.122.100', 48324) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 29 09:11:57 compute-0 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Jan/2026:09:11:57] ENGINE Serving on http://192.168.122.100:8765
Jan 29 09:11:57 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Jan/2026:09:11:57] ENGINE Serving on http://192.168.122.100:8765
Jan 29 09:11:57 compute-0 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Jan/2026:09:11:57] ENGINE Bus STARTED
Jan 29 09:11:57 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Jan/2026:09:11:57] ENGINE Bus STARTED
Jan 29 09:11:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 29 09:11:57 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 09:11:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 29 09:11:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3388585698' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 29 09:11:57 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 09:11:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019905396 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:11:58 compute-0 ceph-mon[75183]: [29/Jan/2026:09:11:57] ENGINE Bus STARTING
Jan 29 09:11:58 compute-0 ceph-mon[75183]: [29/Jan/2026:09:11:57] ENGINE Serving on https://192.168.122.100:7150
Jan 29 09:11:58 compute-0 ceph-mon[75183]: [29/Jan/2026:09:11:57] ENGINE Client ('192.168.122.100', 48324) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 29 09:11:58 compute-0 ceph-mon[75183]: [29/Jan/2026:09:11:57] ENGINE Serving on http://192.168.122.100:8765
Jan 29 09:11:58 compute-0 ceph-mon[75183]: [29/Jan/2026:09:11:57] ENGINE Bus STARTED
Jan 29 09:11:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 09:11:58 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3388585698' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 29 09:11:58 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3388585698' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 29 09:11:58 compute-0 angry_napier[76187]: module 'orchestrator' is already enabled (always-on)
Jan 29 09:11:58 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.ucpkkb(active, since 2s)
Jan 29 09:11:58 compute-0 systemd[1]: libpod-59b10e8c517ff6448afe715237ef564daa0b813689b6e5f1eef618372de0c1dd.scope: Deactivated successfully.
Jan 29 09:11:58 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:11:58 compute-0 podman[76169]: 2026-01-29 09:11:58.525959534 +0000 UTC m=+1.450898114 container died 59b10e8c517ff6448afe715237ef564daa0b813689b6e5f1eef618372de0c1dd (image=quay.io/ceph/ceph:v20, name=angry_napier, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:11:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-91fb53a5e29bed6106d892ae8700b030a95bc7d820cfc9452be9857303197550-merged.mount: Deactivated successfully.
Jan 29 09:11:58 compute-0 podman[76169]: 2026-01-29 09:11:58.567087829 +0000 UTC m=+1.492026399 container remove 59b10e8c517ff6448afe715237ef564daa0b813689b6e5f1eef618372de0c1dd (image=quay.io/ceph/ceph:v20, name=angry_napier, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 09:11:58 compute-0 systemd[1]: libpod-conmon-59b10e8c517ff6448afe715237ef564daa0b813689b6e5f1eef618372de0c1dd.scope: Deactivated successfully.
Jan 29 09:11:58 compute-0 podman[76249]: 2026-01-29 09:11:58.618585634 +0000 UTC m=+0.035475255 container create 6e205f955ca7931c998a5074ebae7ffa7c2e01d451490d20f946540fe2910579 (image=quay.io/ceph/ceph:v20, name=peaceful_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:11:58 compute-0 systemd[1]: Started libpod-conmon-6e205f955ca7931c998a5074ebae7ffa7c2e01d451490d20f946540fe2910579.scope.
Jan 29 09:11:58 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab6b478dbd95e6cf1eba505f96867cfde59368af9cf0e3b4cfdc21cac5c1a51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab6b478dbd95e6cf1eba505f96867cfde59368af9cf0e3b4cfdc21cac5c1a51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab6b478dbd95e6cf1eba505f96867cfde59368af9cf0e3b4cfdc21cac5c1a51/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:58 compute-0 podman[76249]: 2026-01-29 09:11:58.684828665 +0000 UTC m=+0.101718296 container init 6e205f955ca7931c998a5074ebae7ffa7c2e01d451490d20f946540fe2910579 (image=quay.io/ceph/ceph:v20, name=peaceful_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:11:58 compute-0 podman[76249]: 2026-01-29 09:11:58.689785968 +0000 UTC m=+0.106675589 container start 6e205f955ca7931c998a5074ebae7ffa7c2e01d451490d20f946540fe2910579 (image=quay.io/ceph/ceph:v20, name=peaceful_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 09:11:58 compute-0 podman[76249]: 2026-01-29 09:11:58.694479124 +0000 UTC m=+0.111368745 container attach 6e205f955ca7931c998a5074ebae7ffa7c2e01d451490d20f946540fe2910579 (image=quay.io/ceph/ceph:v20, name=peaceful_mclaren, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 09:11:58 compute-0 podman[76249]: 2026-01-29 09:11:58.603502658 +0000 UTC m=+0.020392299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:11:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 29 09:11:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 29 09:11:59 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 09:11:59 compute-0 systemd[1]: libpod-6e205f955ca7931c998a5074ebae7ffa7c2e01d451490d20f946540fe2910579.scope: Deactivated successfully.
Jan 29 09:11:59 compute-0 podman[76249]: 2026-01-29 09:11:59.241540861 +0000 UTC m=+0.658430482 container died 6e205f955ca7931c998a5074ebae7ffa7c2e01d451490d20f946540fe2910579 (image=quay.io/ceph/ceph:v20, name=peaceful_mclaren, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 09:11:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ab6b478dbd95e6cf1eba505f96867cfde59368af9cf0e3b4cfdc21cac5c1a51-merged.mount: Deactivated successfully.
Jan 29 09:11:59 compute-0 podman[76249]: 2026-01-29 09:11:59.278876694 +0000 UTC m=+0.695766355 container remove 6e205f955ca7931c998a5074ebae7ffa7c2e01d451490d20f946540fe2910579 (image=quay.io/ceph/ceph:v20, name=peaceful_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:11:59 compute-0 systemd[1]: libpod-conmon-6e205f955ca7931c998a5074ebae7ffa7c2e01d451490d20f946540fe2910579.scope: Deactivated successfully.
Jan 29 09:11:59 compute-0 podman[76304]: 2026-01-29 09:11:59.333285437 +0000 UTC m=+0.038345752 container create 745d44ad7c6dbaeebf311492149c1d544294504746bf5fff2a1f2240823ebb9b (image=quay.io/ceph/ceph:v20, name=hopeful_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:11:59 compute-0 systemd[1]: Started libpod-conmon-745d44ad7c6dbaeebf311492149c1d544294504746bf5fff2a1f2240823ebb9b.scope.
Jan 29 09:11:59 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898df844dc1588ade5d19c7da513e8166592435a4e417025f71daf2b36b5793b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898df844dc1588ade5d19c7da513e8166592435a4e417025f71daf2b36b5793b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898df844dc1588ade5d19c7da513e8166592435a4e417025f71daf2b36b5793b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:59 compute-0 podman[76304]: 2026-01-29 09:11:59.390959147 +0000 UTC m=+0.096019142 container init 745d44ad7c6dbaeebf311492149c1d544294504746bf5fff2a1f2240823ebb9b (image=quay.io/ceph/ceph:v20, name=hopeful_shaw, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 09:11:59 compute-0 podman[76304]: 2026-01-29 09:11:59.396355732 +0000 UTC m=+0.101415717 container start 745d44ad7c6dbaeebf311492149c1d544294504746bf5fff2a1f2240823ebb9b (image=quay.io/ceph/ceph:v20, name=hopeful_shaw, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 09:11:59 compute-0 podman[76304]: 2026-01-29 09:11:59.399964689 +0000 UTC m=+0.105024684 container attach 745d44ad7c6dbaeebf311492149c1d544294504746bf5fff2a1f2240823ebb9b (image=quay.io/ceph/ceph:v20, name=hopeful_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:11:59 compute-0 podman[76304]: 2026-01-29 09:11:59.316820124 +0000 UTC m=+0.021880129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:11:59 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3388585698' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 29 09:11:59 compute-0 ceph-mon[75183]: mgrmap e8: compute-0.ucpkkb(active, since 2s)
Jan 29 09:11:59 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:59 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 09:11:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:11:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 29 09:11:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:59 compute-0 ceph-mgr[75473]: [cephadm INFO root] Set ssh ssh_user
Jan 29 09:11:59 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 29 09:11:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 29 09:11:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:11:59 compute-0 ceph-mgr[75473]: [cephadm INFO root] Set ssh ssh_config
Jan 29 09:11:59 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 29 09:11:59 compute-0 ceph-mgr[75473]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 29 09:11:59 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 29 09:11:59 compute-0 hopeful_shaw[76321]: ssh user set to ceph-admin. sudo will be used
Jan 29 09:11:59 compute-0 systemd[1]: libpod-745d44ad7c6dbaeebf311492149c1d544294504746bf5fff2a1f2240823ebb9b.scope: Deactivated successfully.
Jan 29 09:11:59 compute-0 podman[76304]: 2026-01-29 09:11:59.830194985 +0000 UTC m=+0.535254970 container died 745d44ad7c6dbaeebf311492149c1d544294504746bf5fff2a1f2240823ebb9b (image=quay.io/ceph/ceph:v20, name=hopeful_shaw, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 09:11:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-898df844dc1588ade5d19c7da513e8166592435a4e417025f71daf2b36b5793b-merged.mount: Deactivated successfully.
Jan 29 09:11:59 compute-0 podman[76304]: 2026-01-29 09:11:59.867476758 +0000 UTC m=+0.572536753 container remove 745d44ad7c6dbaeebf311492149c1d544294504746bf5fff2a1f2240823ebb9b (image=quay.io/ceph/ceph:v20, name=hopeful_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 09:11:59 compute-0 systemd[1]: libpod-conmon-745d44ad7c6dbaeebf311492149c1d544294504746bf5fff2a1f2240823ebb9b.scope: Deactivated successfully.
Jan 29 09:11:59 compute-0 podman[76358]: 2026-01-29 09:11:59.922446985 +0000 UTC m=+0.038096115 container create 43734cfe43f426d77a3f17e8b19062c170dcbaab0cfbf424f7e3a285dea5ab98 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:11:59 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 09:11:59 compute-0 systemd[1]: Started libpod-conmon-43734cfe43f426d77a3f17e8b19062c170dcbaab0cfbf424f7e3a285dea5ab98.scope.
Jan 29 09:11:59 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811f4e351f69dc1ad5dd603c2fe8c9f399ba81447647e4b221a251e25320af8c/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811f4e351f69dc1ad5dd603c2fe8c9f399ba81447647e4b221a251e25320af8c/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811f4e351f69dc1ad5dd603c2fe8c9f399ba81447647e4b221a251e25320af8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811f4e351f69dc1ad5dd603c2fe8c9f399ba81447647e4b221a251e25320af8c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811f4e351f69dc1ad5dd603c2fe8c9f399ba81447647e4b221a251e25320af8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:11:59 compute-0 podman[76358]: 2026-01-29 09:11:59.983854936 +0000 UTC m=+0.099504096 container init 43734cfe43f426d77a3f17e8b19062c170dcbaab0cfbf424f7e3a285dea5ab98 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 29 09:11:59 compute-0 podman[76358]: 2026-01-29 09:11:59.988510761 +0000 UTC m=+0.104159891 container start 43734cfe43f426d77a3f17e8b19062c170dcbaab0cfbf424f7e3a285dea5ab98 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 09:11:59 compute-0 podman[76358]: 2026-01-29 09:11:59.99219568 +0000 UTC m=+0.107844810 container attach 43734cfe43f426d77a3f17e8b19062c170dcbaab0cfbf424f7e3a285dea5ab98 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:00 compute-0 podman[76358]: 2026-01-29 09:11:59.905049588 +0000 UTC m=+0.020698738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:00 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 29 09:12:00 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:01 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:01 compute-0 ceph-mgr[75473]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 29 09:12:01 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 29 09:12:01 compute-0 ceph-mgr[75473]: [cephadm INFO root] Set ssh private key
Jan 29 09:12:01 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 29 09:12:01 compute-0 systemd[1]: libpod-43734cfe43f426d77a3f17e8b19062c170dcbaab0cfbf424f7e3a285dea5ab98.scope: Deactivated successfully.
Jan 29 09:12:01 compute-0 podman[76401]: 2026-01-29 09:12:01.289285988 +0000 UTC m=+0.022003955 container died 43734cfe43f426d77a3f17e8b19062c170dcbaab0cfbf424f7e3a285dea5ab98 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 09:12:01 compute-0 ceph-mon[75183]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:01 compute-0 ceph-mon[75183]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:01 compute-0 ceph-mon[75183]: Set ssh ssh_user
Jan 29 09:12:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:01 compute-0 ceph-mon[75183]: Set ssh ssh_config
Jan 29 09:12:01 compute-0 ceph-mon[75183]: ssh user set to ceph-admin. sudo will be used
Jan 29 09:12:01 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 09:12:02 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-811f4e351f69dc1ad5dd603c2fe8c9f399ba81447647e4b221a251e25320af8c-merged.mount: Deactivated successfully.
Jan 29 09:12:02 compute-0 ceph-mon[75183]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:02 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:02 compute-0 ceph-mon[75183]: Set ssh ssh_identity_key
Jan 29 09:12:02 compute-0 ceph-mon[75183]: Set ssh private key
Jan 29 09:12:02 compute-0 podman[76401]: 2026-01-29 09:12:02.948427695 +0000 UTC m=+1.681145632 container remove 43734cfe43f426d77a3f17e8b19062c170dcbaab0cfbf424f7e3a285dea5ab98 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:02 compute-0 systemd[1]: libpod-conmon-43734cfe43f426d77a3f17e8b19062c170dcbaab0cfbf424f7e3a285dea5ab98.scope: Deactivated successfully.
Jan 29 09:12:03 compute-0 podman[76416]: 2026-01-29 09:12:03.027069898 +0000 UTC m=+0.059514197 container create 7f930eb261ce961303e6f9965c248a517871e5aae243f8b132d554345172bdcb (image=quay.io/ceph/ceph:v20, name=awesome_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:03 compute-0 systemd[1]: Started libpod-conmon-7f930eb261ce961303e6f9965c248a517871e5aae243f8b132d554345172bdcb.scope.
Jan 29 09:12:03 compute-0 podman[76416]: 2026-01-29 09:12:02.99008378 +0000 UTC m=+0.022528099 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:03 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d4088189a83955e7dd398ce58cc28b7a8589c1113e3e4fd77baa736355dd96/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d4088189a83955e7dd398ce58cc28b7a8589c1113e3e4fd77baa736355dd96/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052666 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d4088189a83955e7dd398ce58cc28b7a8589c1113e3e4fd77baa736355dd96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d4088189a83955e7dd398ce58cc28b7a8589c1113e3e4fd77baa736355dd96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d4088189a83955e7dd398ce58cc28b7a8589c1113e3e4fd77baa736355dd96/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:03 compute-0 podman[76416]: 2026-01-29 09:12:03.11493511 +0000 UTC m=+0.147379429 container init 7f930eb261ce961303e6f9965c248a517871e5aae243f8b132d554345172bdcb (image=quay.io/ceph/ceph:v20, name=awesome_bardeen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 09:12:03 compute-0 podman[76416]: 2026-01-29 09:12:03.119968666 +0000 UTC m=+0.152412985 container start 7f930eb261ce961303e6f9965c248a517871e5aae243f8b132d554345172bdcb (image=quay.io/ceph/ceph:v20, name=awesome_bardeen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 09:12:03 compute-0 podman[76416]: 2026-01-29 09:12:03.124054996 +0000 UTC m=+0.156499295 container attach 7f930eb261ce961303e6f9965c248a517871e5aae243f8b132d554345172bdcb (image=quay.io/ceph/ceph:v20, name=awesome_bardeen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:03 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 29 09:12:03 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:03 compute-0 ceph-mgr[75473]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 29 09:12:03 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 29 09:12:03 compute-0 systemd[1]: libpod-7f930eb261ce961303e6f9965c248a517871e5aae243f8b132d554345172bdcb.scope: Deactivated successfully.
Jan 29 09:12:03 compute-0 podman[76416]: 2026-01-29 09:12:03.575854572 +0000 UTC m=+0.608298871 container died 7f930eb261ce961303e6f9965c248a517871e5aae243f8b132d554345172bdcb (image=quay.io/ceph/ceph:v20, name=awesome_bardeen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 09:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-68d4088189a83955e7dd398ce58cc28b7a8589c1113e3e4fd77baa736355dd96-merged.mount: Deactivated successfully.
Jan 29 09:12:03 compute-0 podman[76416]: 2026-01-29 09:12:03.613529199 +0000 UTC m=+0.645973498 container remove 7f930eb261ce961303e6f9965c248a517871e5aae243f8b132d554345172bdcb (image=quay.io/ceph/ceph:v20, name=awesome_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:12:03 compute-0 systemd[1]: libpod-conmon-7f930eb261ce961303e6f9965c248a517871e5aae243f8b132d554345172bdcb.scope: Deactivated successfully.
Jan 29 09:12:03 compute-0 podman[76470]: 2026-01-29 09:12:03.677583818 +0000 UTC m=+0.042069076 container create 5e6d7f377b7d2f428969f53a62d79aa856b685f53ebe91cf82fc759f93baa7c6 (image=quay.io/ceph/ceph:v20, name=amazing_austin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:03 compute-0 systemd[1]: Started libpod-conmon-5e6d7f377b7d2f428969f53a62d79aa856b685f53ebe91cf82fc759f93baa7c6.scope.
Jan 29 09:12:03 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c6346d8db403fdd56df3e1082ed4f800104bb4aec20859eecb22648be30305/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c6346d8db403fdd56df3e1082ed4f800104bb4aec20859eecb22648be30305/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c6346d8db403fdd56df3e1082ed4f800104bb4aec20859eecb22648be30305/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:03 compute-0 podman[76470]: 2026-01-29 09:12:03.749082598 +0000 UTC m=+0.113567886 container init 5e6d7f377b7d2f428969f53a62d79aa856b685f53ebe91cf82fc759f93baa7c6 (image=quay.io/ceph/ceph:v20, name=amazing_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 09:12:03 compute-0 podman[76470]: 2026-01-29 09:12:03.658934905 +0000 UTC m=+0.023420193 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:03 compute-0 podman[76470]: 2026-01-29 09:12:03.753880288 +0000 UTC m=+0.118365546 container start 5e6d7f377b7d2f428969f53a62d79aa856b685f53ebe91cf82fc759f93baa7c6 (image=quay.io/ceph/ceph:v20, name=amazing_austin, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 09:12:03 compute-0 podman[76470]: 2026-01-29 09:12:03.76285383 +0000 UTC m=+0.127339108 container attach 5e6d7f377b7d2f428969f53a62d79aa856b685f53ebe91cf82fc759f93baa7c6 (image=quay.io/ceph/ceph:v20, name=amazing_austin, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:03 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 09:12:04 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:04 compute-0 amazing_austin[76486]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXkfi1IctYmMcZ3ODvx1fZDDcAagRYSPfhyQuFoawxsOtQntN1lvuN/tTk/w3djKA2zWYmUT+n46yPbS+gPAJi3U/DM9h7BiVj4tvpY/n2DBRsq1XsatES1sCburZC/hHNYKZmWYCyuk56wms9P90cdUwUTK1I1F3pliumk5ORHmHbLtY3m8Qujl1/5SJdcdPLXuQNzTt2pas+AhHLDcoRCBoychXCjgPsvh/HVZrmVFR61T+E7S9NEWH64N9wSsPDgSMgoHJvweHX/n4R8or/NJ3TPQstYO1WRu0DAOROhpNiivaPX7gjR+/rAiZQ47O8b3T1UZt43hETy12509L2HfgxMGGKcGz2ChPKtaJ14xIkIytoLg2CcBjB1iC0grm73REae+9mroM1Yaewqth/YbJXk5HRpasMLR2K/MUP/kpobscZh0YM9jo8751Qsli7kW6RXjKRiLuFT+iLYCrK1hXw95WTRZ7JNaVmCslMNdsUEn4TXqIwqZ8FanIuuNc= zuul@controller
Jan 29 09:12:04 compute-0 systemd[1]: libpod-5e6d7f377b7d2f428969f53a62d79aa856b685f53ebe91cf82fc759f93baa7c6.scope: Deactivated successfully.
Jan 29 09:12:04 compute-0 conmon[76486]: conmon 5e6d7f377b7d2f428969 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e6d7f377b7d2f428969f53a62d79aa856b685f53ebe91cf82fc759f93baa7c6.scope/container/memory.events
Jan 29 09:12:04 compute-0 podman[76470]: 2026-01-29 09:12:04.175078108 +0000 UTC m=+0.539563376 container died 5e6d7f377b7d2f428969f53a62d79aa856b685f53ebe91cf82fc759f93baa7c6 (image=quay.io/ceph/ceph:v20, name=amazing_austin, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:12:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-64c6346d8db403fdd56df3e1082ed4f800104bb4aec20859eecb22648be30305-merged.mount: Deactivated successfully.
Jan 29 09:12:04 compute-0 podman[76470]: 2026-01-29 09:12:04.209497577 +0000 UTC m=+0.573982835 container remove 5e6d7f377b7d2f428969f53a62d79aa856b685f53ebe91cf82fc759f93baa7c6 (image=quay.io/ceph/ceph:v20, name=amazing_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 09:12:04 compute-0 systemd[1]: libpod-conmon-5e6d7f377b7d2f428969f53a62d79aa856b685f53ebe91cf82fc759f93baa7c6.scope: Deactivated successfully.
Jan 29 09:12:04 compute-0 podman[76523]: 2026-01-29 09:12:04.266622269 +0000 UTC m=+0.039159638 container create eee71c106998b6b5d6e29eb54d43541a56bb33eab13dbe8a62935364161e9ddb (image=quay.io/ceph/ceph:v20, name=lucid_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 09:12:04 compute-0 systemd[1]: Started libpod-conmon-eee71c106998b6b5d6e29eb54d43541a56bb33eab13dbe8a62935364161e9ddb.scope.
Jan 29 09:12:04 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe51fa2d606811811304b0642d9f8f392058ab2340293d4d0f7e1aabfde5a148/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe51fa2d606811811304b0642d9f8f392058ab2340293d4d0f7e1aabfde5a148/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe51fa2d606811811304b0642d9f8f392058ab2340293d4d0f7e1aabfde5a148/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:04 compute-0 podman[76523]: 2026-01-29 09:12:04.331374617 +0000 UTC m=+0.103912006 container init eee71c106998b6b5d6e29eb54d43541a56bb33eab13dbe8a62935364161e9ddb (image=quay.io/ceph/ceph:v20, name=lucid_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:12:04 compute-0 podman[76523]: 2026-01-29 09:12:04.335844788 +0000 UTC m=+0.108382157 container start eee71c106998b6b5d6e29eb54d43541a56bb33eab13dbe8a62935364161e9ddb (image=quay.io/ceph/ceph:v20, name=lucid_colden, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:12:04 compute-0 podman[76523]: 2026-01-29 09:12:04.340026601 +0000 UTC m=+0.112564160 container attach eee71c106998b6b5d6e29eb54d43541a56bb33eab13dbe8a62935364161e9ddb (image=quay.io/ceph/ceph:v20, name=lucid_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 09:12:04 compute-0 podman[76523]: 2026-01-29 09:12:04.24887757 +0000 UTC m=+0.021414959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:04 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:04 compute-0 ceph-mon[75183]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:04 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:04 compute-0 ceph-mon[75183]: Set ssh ssh_identity_pub
Jan 29 09:12:04 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:04 compute-0 sshd-session[76566]: Accepted publickey for ceph-admin from 192.168.122.100 port 50572 ssh2: RSA SHA256:TE3qk6UeXQNixiVnNk32+m51NOuCaiLmMMgUlYFkcfA
Jan 29 09:12:04 compute-0 systemd-logind[799]: New session 20 of user ceph-admin.
Jan 29 09:12:04 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 29 09:12:04 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 29 09:12:04 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 29 09:12:04 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 29 09:12:04 compute-0 systemd[76570]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:12:05 compute-0 systemd[76570]: Queued start job for default target Main User Target.
Jan 29 09:12:05 compute-0 systemd[76570]: Created slice User Application Slice.
Jan 29 09:12:05 compute-0 systemd[76570]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 29 09:12:05 compute-0 systemd[76570]: Started Daily Cleanup of User's Temporary Directories.
Jan 29 09:12:05 compute-0 systemd[76570]: Reached target Paths.
Jan 29 09:12:05 compute-0 systemd[76570]: Reached target Timers.
Jan 29 09:12:05 compute-0 systemd[76570]: Starting D-Bus User Message Bus Socket...
Jan 29 09:12:05 compute-0 systemd[76570]: Starting Create User's Volatile Files and Directories...
Jan 29 09:12:05 compute-0 systemd[76570]: Finished Create User's Volatile Files and Directories.
Jan 29 09:12:05 compute-0 systemd[76570]: Listening on D-Bus User Message Bus Socket.
Jan 29 09:12:05 compute-0 systemd[76570]: Reached target Sockets.
Jan 29 09:12:05 compute-0 systemd[76570]: Reached target Basic System.
Jan 29 09:12:05 compute-0 systemd[76570]: Reached target Main User Target.
Jan 29 09:12:05 compute-0 systemd[76570]: Startup finished in 125ms.
Jan 29 09:12:05 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 29 09:12:05 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Jan 29 09:12:05 compute-0 sshd-session[76566]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:12:05 compute-0 sshd-session[76586]: Accepted publickey for ceph-admin from 192.168.122.100 port 50588 ssh2: RSA SHA256:TE3qk6UeXQNixiVnNk32+m51NOuCaiLmMMgUlYFkcfA
Jan 29 09:12:05 compute-0 systemd-logind[799]: New session 22 of user ceph-admin.
Jan 29 09:12:05 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Jan 29 09:12:05 compute-0 sshd-session[76586]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:12:05 compute-0 sudo[76590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:05 compute-0 sudo[76590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:05 compute-0 sudo[76590]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:05 compute-0 sshd-session[76615]: Accepted publickey for ceph-admin from 192.168.122.100 port 50602 ssh2: RSA SHA256:TE3qk6UeXQNixiVnNk32+m51NOuCaiLmMMgUlYFkcfA
Jan 29 09:12:05 compute-0 systemd-logind[799]: New session 23 of user ceph-admin.
Jan 29 09:12:05 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 29 09:12:05 compute-0 sshd-session[76615]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:12:05 compute-0 ceph-mon[75183]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:05 compute-0 ceph-mon[75183]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:05 compute-0 sudo[76619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 29 09:12:05 compute-0 sudo[76619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:05 compute-0 sudo[76619]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:05 compute-0 sshd-session[76644]: Accepted publickey for ceph-admin from 192.168.122.100 port 50618 ssh2: RSA SHA256:TE3qk6UeXQNixiVnNk32+m51NOuCaiLmMMgUlYFkcfA
Jan 29 09:12:05 compute-0 systemd-logind[799]: New session 24 of user ceph-admin.
Jan 29 09:12:05 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 29 09:12:05 compute-0 sshd-session[76644]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:12:05 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 09:12:05 compute-0 sudo[76648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 29 09:12:05 compute-0 sudo[76648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:05 compute-0 sudo[76648]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:05 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 29 09:12:05 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 29 09:12:06 compute-0 sshd-session[76673]: Accepted publickey for ceph-admin from 192.168.122.100 port 50634 ssh2: RSA SHA256:TE3qk6UeXQNixiVnNk32+m51NOuCaiLmMMgUlYFkcfA
Jan 29 09:12:06 compute-0 systemd-logind[799]: New session 25 of user ceph-admin.
Jan 29 09:12:06 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 29 09:12:06 compute-0 sshd-session[76673]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:12:06 compute-0 sudo[76677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:06 compute-0 sudo[76677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:06 compute-0 sudo[76677]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:06 compute-0 sshd-session[76702]: Accepted publickey for ceph-admin from 192.168.122.100 port 50648 ssh2: RSA SHA256:TE3qk6UeXQNixiVnNk32+m51NOuCaiLmMMgUlYFkcfA
Jan 29 09:12:06 compute-0 systemd-logind[799]: New session 26 of user ceph-admin.
Jan 29 09:12:06 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 29 09:12:06 compute-0 sshd-session[76702]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:12:06 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:06 compute-0 sudo[76706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:06 compute-0 sudo[76706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:06 compute-0 ceph-mon[75183]: Deploying cephadm binary to compute-0
Jan 29 09:12:06 compute-0 sudo[76706]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:06 compute-0 sshd-session[76731]: Accepted publickey for ceph-admin from 192.168.122.100 port 50652 ssh2: RSA SHA256:TE3qk6UeXQNixiVnNk32+m51NOuCaiLmMMgUlYFkcfA
Jan 29 09:12:06 compute-0 systemd-logind[799]: New session 27 of user ceph-admin.
Jan 29 09:12:06 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 29 09:12:06 compute-0 sshd-session[76731]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:12:06 compute-0 sudo[76735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 29 09:12:06 compute-0 sudo[76735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:06 compute-0 sudo[76735]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:07 compute-0 sshd-session[76760]: Accepted publickey for ceph-admin from 192.168.122.100 port 50664 ssh2: RSA SHA256:TE3qk6UeXQNixiVnNk32+m51NOuCaiLmMMgUlYFkcfA
Jan 29 09:12:07 compute-0 systemd-logind[799]: New session 28 of user ceph-admin.
Jan 29 09:12:07 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 29 09:12:07 compute-0 sshd-session[76760]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:12:07 compute-0 sudo[76764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:07 compute-0 sudo[76764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:07 compute-0 sudo[76764]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:07 compute-0 sshd-session[76789]: Accepted publickey for ceph-admin from 192.168.122.100 port 50680 ssh2: RSA SHA256:TE3qk6UeXQNixiVnNk32+m51NOuCaiLmMMgUlYFkcfA
Jan 29 09:12:07 compute-0 systemd-logind[799]: New session 29 of user ceph-admin.
Jan 29 09:12:07 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 29 09:12:07 compute-0 sshd-session[76789]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:12:07 compute-0 sudo[76793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 29 09:12:07 compute-0 sudo[76793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:07 compute-0 sudo[76793]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:07 compute-0 sshd-session[76818]: Accepted publickey for ceph-admin from 192.168.122.100 port 50694 ssh2: RSA SHA256:TE3qk6UeXQNixiVnNk32+m51NOuCaiLmMMgUlYFkcfA
Jan 29 09:12:07 compute-0 systemd-logind[799]: New session 30 of user ceph-admin.
Jan 29 09:12:07 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 29 09:12:07 compute-0 sshd-session[76818]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:12:07 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 09:12:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054703 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:12:08 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:09 compute-0 sshd-session[76845]: Accepted publickey for ceph-admin from 192.168.122.100 port 50696 ssh2: RSA SHA256:TE3qk6UeXQNixiVnNk32+m51NOuCaiLmMMgUlYFkcfA
Jan 29 09:12:09 compute-0 systemd-logind[799]: New session 31 of user ceph-admin.
Jan 29 09:12:09 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 29 09:12:09 compute-0 sshd-session[76845]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:12:09 compute-0 sudo[76849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 29 09:12:09 compute-0 sudo[76849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:09 compute-0 sudo[76849]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:09 compute-0 sshd-session[76874]: Accepted publickey for ceph-admin from 192.168.122.100 port 50708 ssh2: RSA SHA256:TE3qk6UeXQNixiVnNk32+m51NOuCaiLmMMgUlYFkcfA
Jan 29 09:12:09 compute-0 systemd-logind[799]: New session 32 of user ceph-admin.
Jan 29 09:12:09 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 29 09:12:09 compute-0 sshd-session[76874]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 29 09:12:09 compute-0 sudo[76878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 29 09:12:09 compute-0 sudo[76878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:09 compute-0 sudo[76878]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 29 09:12:09 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:09 compute-0 ceph-mgr[75473]: [cephadm INFO root] Added host compute-0
Jan 29 09:12:09 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 29 09:12:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 29 09:12:09 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 09:12:09 compute-0 lucid_colden[76540]: Added host 'compute-0' with addr '192.168.122.100'
Jan 29 09:12:09 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 09:12:09 compute-0 systemd[1]: libpod-eee71c106998b6b5d6e29eb54d43541a56bb33eab13dbe8a62935364161e9ddb.scope: Deactivated successfully.
Jan 29 09:12:09 compute-0 podman[76523]: 2026-01-29 09:12:09.961466356 +0000 UTC m=+5.734003735 container died eee71c106998b6b5d6e29eb54d43541a56bb33eab13dbe8a62935364161e9ddb (image=quay.io/ceph/ceph:v20, name=lucid_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 29 09:12:09 compute-0 sudo[76924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:09 compute-0 sudo[76924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe51fa2d606811811304b0642d9f8f392058ab2340293d4d0f7e1aabfde5a148-merged.mount: Deactivated successfully.
Jan 29 09:12:09 compute-0 sudo[76924]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:10 compute-0 podman[76523]: 2026-01-29 09:12:10.000944961 +0000 UTC m=+5.773482340 container remove eee71c106998b6b5d6e29eb54d43541a56bb33eab13dbe8a62935364161e9ddb (image=quay.io/ceph/ceph:v20, name=lucid_colden, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 09:12:10 compute-0 systemd[1]: libpod-conmon-eee71c106998b6b5d6e29eb54d43541a56bb33eab13dbe8a62935364161e9ddb.scope: Deactivated successfully.
Jan 29 09:12:10 compute-0 sudo[76962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Jan 29 09:12:10 compute-0 sudo[76962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:10 compute-0 podman[76967]: 2026-01-29 09:12:10.063638954 +0000 UTC m=+0.043757262 container create 6d7cca628380d837da13c5fe1b8e1cb977a77b4ba794dc5ed03dd6f039f2def1 (image=quay.io/ceph/ceph:v20, name=mystifying_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 09:12:10 compute-0 systemd[1]: Started libpod-conmon-6d7cca628380d837da13c5fe1b8e1cb977a77b4ba794dc5ed03dd6f039f2def1.scope.
Jan 29 09:12:10 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59dcae10e6cfc2cd92314b738f50c4f4032e8f8dbed5c74fd73508b03dc63a34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59dcae10e6cfc2cd92314b738f50c4f4032e8f8dbed5c74fd73508b03dc63a34/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59dcae10e6cfc2cd92314b738f50c4f4032e8f8dbed5c74fd73508b03dc63a34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:10 compute-0 podman[76967]: 2026-01-29 09:12:10.044915228 +0000 UTC m=+0.025033556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:10 compute-0 podman[76967]: 2026-01-29 09:12:10.145624577 +0000 UTC m=+0.125742915 container init 6d7cca628380d837da13c5fe1b8e1cb977a77b4ba794dc5ed03dd6f039f2def1 (image=quay.io/ceph/ceph:v20, name=mystifying_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 09:12:10 compute-0 podman[76967]: 2026-01-29 09:12:10.152482642 +0000 UTC m=+0.132600950 container start 6d7cca628380d837da13c5fe1b8e1cb977a77b4ba794dc5ed03dd6f039f2def1 (image=quay.io/ceph/ceph:v20, name=mystifying_feistel, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:10 compute-0 podman[76967]: 2026-01-29 09:12:10.156026708 +0000 UTC m=+0.136145016 container attach 6d7cca628380d837da13c5fe1b8e1cb977a77b4ba794dc5ed03dd6f039f2def1 (image=quay.io/ceph/ceph:v20, name=mystifying_feistel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:10 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:10 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 29 09:12:10 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 29 09:12:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 29 09:12:10 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:10 compute-0 mystifying_feistel[77003]: Scheduled mon update...
Jan 29 09:12:10 compute-0 systemd[1]: libpod-6d7cca628380d837da13c5fe1b8e1cb977a77b4ba794dc5ed03dd6f039f2def1.scope: Deactivated successfully.
Jan 29 09:12:10 compute-0 podman[76967]: 2026-01-29 09:12:10.643726003 +0000 UTC m=+0.623844321 container died 6d7cca628380d837da13c5fe1b8e1cb977a77b4ba794dc5ed03dd6f039f2def1 (image=quay.io/ceph/ceph:v20, name=mystifying_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-59dcae10e6cfc2cd92314b738f50c4f4032e8f8dbed5c74fd73508b03dc63a34-merged.mount: Deactivated successfully.
Jan 29 09:12:10 compute-0 podman[76967]: 2026-01-29 09:12:10.684624867 +0000 UTC m=+0.664743175 container remove 6d7cca628380d837da13c5fe1b8e1cb977a77b4ba794dc5ed03dd6f039f2def1 (image=quay.io/ceph/ceph:v20, name=mystifying_feistel, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:10 compute-0 systemd[1]: libpod-conmon-6d7cca628380d837da13c5fe1b8e1cb977a77b4ba794dc5ed03dd6f039f2def1.scope: Deactivated successfully.
Jan 29 09:12:10 compute-0 podman[77068]: 2026-01-29 09:12:10.737539505 +0000 UTC m=+0.037669098 container create a4a30810f755f6758044e18160dce16318df35360ecfa96d22554dcff2721f19 (image=quay.io/ceph/ceph:v20, name=peaceful_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 09:12:10 compute-0 systemd[1]: Started libpod-conmon-a4a30810f755f6758044e18160dce16318df35360ecfa96d22554dcff2721f19.scope.
Jan 29 09:12:10 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc7d8950cf71266e6d3bd019194074ce7385833c979894ad2a6b5501616ea5a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc7d8950cf71266e6d3bd019194074ce7385833c979894ad2a6b5501616ea5a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc7d8950cf71266e6d3bd019194074ce7385833c979894ad2a6b5501616ea5a0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:10 compute-0 podman[77068]: 2026-01-29 09:12:10.810626398 +0000 UTC m=+0.110756011 container init a4a30810f755f6758044e18160dce16318df35360ecfa96d22554dcff2721f19 (image=quay.io/ceph/ceph:v20, name=peaceful_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:10 compute-0 podman[77068]: 2026-01-29 09:12:10.816193388 +0000 UTC m=+0.116322981 container start a4a30810f755f6758044e18160dce16318df35360ecfa96d22554dcff2721f19 (image=quay.io/ceph/ceph:v20, name=peaceful_feistel, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:10 compute-0 podman[77068]: 2026-01-29 09:12:10.72069218 +0000 UTC m=+0.020821783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:10 compute-0 podman[77068]: 2026-01-29 09:12:10.821774609 +0000 UTC m=+0.121904362 container attach a4a30810f755f6758044e18160dce16318df35360ecfa96d22554dcff2721f19 (image=quay.io/ceph/ceph:v20, name=peaceful_feistel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:12:10 compute-0 podman[77039]: 2026-01-29 09:12:10.823507926 +0000 UTC m=+0.538725074 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:10 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:10 compute-0 ceph-mon[75183]: Added host compute-0
Jan 29 09:12:10 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 09:12:10 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:10 compute-0 podman[77103]: 2026-01-29 09:12:10.944820261 +0000 UTC m=+0.051099081 container create 98fcb158bd4a7f77f60369ffeb3635ba798e0934539a7813601154701459623a (image=quay.io/ceph/ceph:v20, name=magical_almeida, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 09:12:10 compute-0 systemd[1]: Started libpod-conmon-98fcb158bd4a7f77f60369ffeb3635ba798e0934539a7813601154701459623a.scope.
Jan 29 09:12:10 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:11 compute-0 podman[77103]: 2026-01-29 09:12:11.014838691 +0000 UTC m=+0.121117531 container init 98fcb158bd4a7f77f60369ffeb3635ba798e0934539a7813601154701459623a (image=quay.io/ceph/ceph:v20, name=magical_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 09:12:11 compute-0 podman[77103]: 2026-01-29 09:12:10.925756066 +0000 UTC m=+0.032034916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:11 compute-0 podman[77103]: 2026-01-29 09:12:11.027583655 +0000 UTC m=+0.133862475 container start 98fcb158bd4a7f77f60369ffeb3635ba798e0934539a7813601154701459623a (image=quay.io/ceph/ceph:v20, name=magical_almeida, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:11 compute-0 podman[77103]: 2026-01-29 09:12:11.033826903 +0000 UTC m=+0.140105733 container attach 98fcb158bd4a7f77f60369ffeb3635ba798e0934539a7813601154701459623a (image=quay.io/ceph/ceph:v20, name=magical_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 29 09:12:11 compute-0 magical_almeida[77138]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 29 09:12:11 compute-0 systemd[1]: libpod-98fcb158bd4a7f77f60369ffeb3635ba798e0934539a7813601154701459623a.scope: Deactivated successfully.
Jan 29 09:12:11 compute-0 podman[77103]: 2026-01-29 09:12:11.1815165 +0000 UTC m=+0.287795340 container died 98fcb158bd4a7f77f60369ffeb3635ba798e0934539a7813601154701459623a (image=quay.io/ceph/ceph:v20, name=magical_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bb5f1e90d81bf79c9453dc9e02fa9083b6bba2013e31977d53f2a840bb1c9ab-merged.mount: Deactivated successfully.
Jan 29 09:12:11 compute-0 podman[77103]: 2026-01-29 09:12:11.220330508 +0000 UTC m=+0.326609318 container remove 98fcb158bd4a7f77f60369ffeb3635ba798e0934539a7813601154701459623a (image=quay.io/ceph/ceph:v20, name=magical_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 29 09:12:11 compute-0 systemd[1]: libpod-conmon-98fcb158bd4a7f77f60369ffeb3635ba798e0934539a7813601154701459623a.scope: Deactivated successfully.
Jan 29 09:12:11 compute-0 sudo[76962]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 29 09:12:11 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:11 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:11 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 29 09:12:11 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 29 09:12:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 29 09:12:11 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:11 compute-0 peaceful_feistel[77084]: Scheduled mgr update...
Jan 29 09:12:11 compute-0 sudo[77155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:11 compute-0 sudo[77155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:11 compute-0 sudo[77155]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:11 compute-0 systemd[1]: libpod-a4a30810f755f6758044e18160dce16318df35360ecfa96d22554dcff2721f19.scope: Deactivated successfully.
Jan 29 09:12:11 compute-0 podman[77068]: 2026-01-29 09:12:11.369089543 +0000 UTC m=+0.669219136 container died a4a30810f755f6758044e18160dce16318df35360ecfa96d22554dcff2721f19 (image=quay.io/ceph/ceph:v20, name=peaceful_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc7d8950cf71266e6d3bd019194074ce7385833c979894ad2a6b5501616ea5a0-merged.mount: Deactivated successfully.
Jan 29 09:12:11 compute-0 podman[77068]: 2026-01-29 09:12:11.408370314 +0000 UTC m=+0.708499897 container remove a4a30810f755f6758044e18160dce16318df35360ecfa96d22554dcff2721f19 (image=quay.io/ceph/ceph:v20, name=peaceful_feistel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 09:12:11 compute-0 systemd[1]: libpod-conmon-a4a30810f755f6758044e18160dce16318df35360ecfa96d22554dcff2721f19.scope: Deactivated successfully.
Jan 29 09:12:11 compute-0 sudo[77183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 29 09:12:11 compute-0 sudo[77183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:11 compute-0 podman[77217]: 2026-01-29 09:12:11.459134364 +0000 UTC m=+0.035164850 container create 438fd4ed0ec6dc81a580addd9f15928f498ddea073230f13b7986d6656f773ae (image=quay.io/ceph/ceph:v20, name=zealous_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 09:12:11 compute-0 systemd[1]: Started libpod-conmon-438fd4ed0ec6dc81a580addd9f15928f498ddea073230f13b7986d6656f773ae.scope.
Jan 29 09:12:11 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8d7746c677765686256c01c0e57bb679517653392f4757a0b09fe2e01abc77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8d7746c677765686256c01c0e57bb679517653392f4757a0b09fe2e01abc77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8d7746c677765686256c01c0e57bb679517653392f4757a0b09fe2e01abc77/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:11 compute-0 podman[77217]: 2026-01-29 09:12:11.444075858 +0000 UTC m=+0.020106364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:11 compute-0 podman[77217]: 2026-01-29 09:12:11.547380226 +0000 UTC m=+0.123410742 container init 438fd4ed0ec6dc81a580addd9f15928f498ddea073230f13b7986d6656f773ae (image=quay.io/ceph/ceph:v20, name=zealous_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 09:12:11 compute-0 podman[77217]: 2026-01-29 09:12:11.552502195 +0000 UTC m=+0.128532681 container start 438fd4ed0ec6dc81a580addd9f15928f498ddea073230f13b7986d6656f773ae (image=quay.io/ceph/ceph:v20, name=zealous_tesla, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:11 compute-0 podman[77217]: 2026-01-29 09:12:11.57010306 +0000 UTC m=+0.146133556 container attach 438fd4ed0ec6dc81a580addd9f15928f498ddea073230f13b7986d6656f773ae (image=quay.io/ceph/ceph:v20, name=zealous_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:12:11 compute-0 sudo[77183]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:11 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:11 compute-0 sudo[77279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:11 compute-0 sudo[77279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:11 compute-0 sudo[77279]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:11 compute-0 sudo[77304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 29 09:12:11 compute-0 sudo[77304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:11 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 09:12:11 compute-0 ceph-mon[75183]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:11 compute-0 ceph-mon[75183]: Saving service mon spec with placement count:5
Jan 29 09:12:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:11 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:11 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service crash spec with placement *
Jan 29 09:12:11 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 29 09:12:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 29 09:12:11 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:12 compute-0 zealous_tesla[77234]: Scheduled crash update...
Jan 29 09:12:12 compute-0 systemd[1]: libpod-438fd4ed0ec6dc81a580addd9f15928f498ddea073230f13b7986d6656f773ae.scope: Deactivated successfully.
Jan 29 09:12:12 compute-0 podman[77217]: 2026-01-29 09:12:12.024448075 +0000 UTC m=+0.600478561 container died 438fd4ed0ec6dc81a580addd9f15928f498ddea073230f13b7986d6656f773ae (image=quay.io/ceph/ceph:v20, name=zealous_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 09:12:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd8d7746c677765686256c01c0e57bb679517653392f4757a0b09fe2e01abc77-merged.mount: Deactivated successfully.
Jan 29 09:12:12 compute-0 podman[77217]: 2026-01-29 09:12:12.067593519 +0000 UTC m=+0.643624005 container remove 438fd4ed0ec6dc81a580addd9f15928f498ddea073230f13b7986d6656f773ae (image=quay.io/ceph/ceph:v20, name=zealous_tesla, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 29 09:12:12 compute-0 systemd[1]: libpod-conmon-438fd4ed0ec6dc81a580addd9f15928f498ddea073230f13b7986d6656f773ae.scope: Deactivated successfully.
Jan 29 09:12:12 compute-0 podman[77343]: 2026-01-29 09:12:12.128055071 +0000 UTC m=+0.044196724 container create 7e04ff6c2b3439ca176366b7bf5c8d0feb2bf0b3b00d8735f4590cc04aeaaecf (image=quay.io/ceph/ceph:v20, name=dreamy_euclid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 09:12:12 compute-0 systemd[1]: Started libpod-conmon-7e04ff6c2b3439ca176366b7bf5c8d0feb2bf0b3b00d8735f4590cc04aeaaecf.scope.
Jan 29 09:12:12 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfcd64ddd40104aea3a5ede42f6c29aeb589dc4280e29e6475f30b0446ec947/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfcd64ddd40104aea3a5ede42f6c29aeb589dc4280e29e6475f30b0446ec947/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfcd64ddd40104aea3a5ede42f6c29aeb589dc4280e29e6475f30b0446ec947/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:12 compute-0 podman[77343]: 2026-01-29 09:12:12.202992584 +0000 UTC m=+0.119134257 container init 7e04ff6c2b3439ca176366b7bf5c8d0feb2bf0b3b00d8735f4590cc04aeaaecf (image=quay.io/ceph/ceph:v20, name=dreamy_euclid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:12 compute-0 podman[77343]: 2026-01-29 09:12:12.106935041 +0000 UTC m=+0.023076714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:12 compute-0 podman[77343]: 2026-01-29 09:12:12.211106823 +0000 UTC m=+0.127248476 container start 7e04ff6c2b3439ca176366b7bf5c8d0feb2bf0b3b00d8735f4590cc04aeaaecf (image=quay.io/ceph/ceph:v20, name=dreamy_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:12 compute-0 podman[77343]: 2026-01-29 09:12:12.215676547 +0000 UTC m=+0.131818200 container attach 7e04ff6c2b3439ca176366b7bf5c8d0feb2bf0b3b00d8735f4590cc04aeaaecf (image=quay.io/ceph/ceph:v20, name=dreamy_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 09:12:12 compute-0 podman[77405]: 2026-01-29 09:12:12.316240191 +0000 UTC m=+0.060081653 container exec 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:12:12 compute-0 podman[77405]: 2026-01-29 09:12:12.405283155 +0000 UTC m=+0.149124597 container exec_died 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 09:12:12 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:12 compute-0 sudo[77304]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:12 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 29 09:12:12 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1231614367' entity='client.admin' 
Jan 29 09:12:12 compute-0 systemd[1]: libpod-7e04ff6c2b3439ca176366b7bf5c8d0feb2bf0b3b00d8735f4590cc04aeaaecf.scope: Deactivated successfully.
Jan 29 09:12:12 compute-0 podman[77343]: 2026-01-29 09:12:12.682230171 +0000 UTC m=+0.598371844 container died 7e04ff6c2b3439ca176366b7bf5c8d0feb2bf0b3b00d8735f4590cc04aeaaecf (image=quay.io/ceph/ceph:v20, name=dreamy_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 09:12:12 compute-0 sudo[77502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:12 compute-0 sudo[77502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cfcd64ddd40104aea3a5ede42f6c29aeb589dc4280e29e6475f30b0446ec947-merged.mount: Deactivated successfully.
Jan 29 09:12:12 compute-0 sudo[77502]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:12 compute-0 podman[77343]: 2026-01-29 09:12:12.720059982 +0000 UTC m=+0.636201635 container remove 7e04ff6c2b3439ca176366b7bf5c8d0feb2bf0b3b00d8735f4590cc04aeaaecf (image=quay.io/ceph/ceph:v20, name=dreamy_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:12 compute-0 systemd[1]: libpod-conmon-7e04ff6c2b3439ca176366b7bf5c8d0feb2bf0b3b00d8735f4590cc04aeaaecf.scope: Deactivated successfully.
Jan 29 09:12:12 compute-0 sudo[77540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:12:12 compute-0 sudo[77540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:12 compute-0 podman[77547]: 2026-01-29 09:12:12.77776824 +0000 UTC m=+0.039981021 container create 0d2deacac7f55827193ca2eb3235683661301f39dbd0f4b807db3798808495af (image=quay.io/ceph/ceph:v20, name=condescending_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:12 compute-0 systemd[1]: Started libpod-conmon-0d2deacac7f55827193ca2eb3235683661301f39dbd0f4b807db3798808495af.scope.
Jan 29 09:12:12 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f13cf477af2dc9318c437a611068a4e28fe83e08cde0d2fc5dbaa600d2fa01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f13cf477af2dc9318c437a611068a4e28fe83e08cde0d2fc5dbaa600d2fa01/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f13cf477af2dc9318c437a611068a4e28fe83e08cde0d2fc5dbaa600d2fa01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:12 compute-0 podman[77547]: 2026-01-29 09:12:12.855687712 +0000 UTC m=+0.117900503 container init 0d2deacac7f55827193ca2eb3235683661301f39dbd0f4b807db3798808495af (image=quay.io/ceph/ceph:v20, name=condescending_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 09:12:12 compute-0 podman[77547]: 2026-01-29 09:12:12.758777947 +0000 UTC m=+0.020990758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:12 compute-0 podman[77547]: 2026-01-29 09:12:12.861228142 +0000 UTC m=+0.123440923 container start 0d2deacac7f55827193ca2eb3235683661301f39dbd0f4b807db3798808495af (image=quay.io/ceph/ceph:v20, name=condescending_hertz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Jan 29 09:12:12 compute-0 podman[77547]: 2026-01-29 09:12:12.887586123 +0000 UTC m=+0.149798934 container attach 0d2deacac7f55827193ca2eb3235683661301f39dbd0f4b807db3798808495af (image=quay.io/ceph/ceph:v20, name=condescending_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 09:12:13 compute-0 ceph-mon[75183]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:13 compute-0 ceph-mon[75183]: Saving service mgr spec with placement count:2
Jan 29 09:12:13 compute-0 ceph-mon[75183]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:13 compute-0 ceph-mon[75183]: Saving service crash spec with placement *
Jan 29 09:12:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:13 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1231614367' entity='client.admin' 
Jan 29 09:12:13 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77616 (sysctl)
Jan 29 09:12:13 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 29 09:12:13 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 29 09:12:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:12:13 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 29 09:12:13 compute-0 sudo[77540]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:13 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:13 compute-0 systemd[1]: libpod-0d2deacac7f55827193ca2eb3235683661301f39dbd0f4b807db3798808495af.scope: Deactivated successfully.
Jan 29 09:12:13 compute-0 conmon[77582]: conmon 0d2deacac7f55827193c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d2deacac7f55827193ca2eb3235683661301f39dbd0f4b807db3798808495af.scope/container/memory.events
Jan 29 09:12:13 compute-0 podman[77547]: 2026-01-29 09:12:13.307647673 +0000 UTC m=+0.569860464 container died 0d2deacac7f55827193ca2eb3235683661301f39dbd0f4b807db3798808495af (image=quay.io/ceph/ceph:v20, name=condescending_hertz, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:13 compute-0 sudo[77639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:13 compute-0 sudo[77639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:13 compute-0 sudo[77639]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-37f13cf477af2dc9318c437a611068a4e28fe83e08cde0d2fc5dbaa600d2fa01-merged.mount: Deactivated successfully.
Jan 29 09:12:13 compute-0 podman[77547]: 2026-01-29 09:12:13.358011962 +0000 UTC m=+0.620224743 container remove 0d2deacac7f55827193ca2eb3235683661301f39dbd0f4b807db3798808495af (image=quay.io/ceph/ceph:v20, name=condescending_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:12:13 compute-0 systemd[1]: libpod-conmon-0d2deacac7f55827193ca2eb3235683661301f39dbd0f4b807db3798808495af.scope: Deactivated successfully.
Jan 29 09:12:13 compute-0 sudo[77676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 29 09:12:13 compute-0 sudo[77676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:13 compute-0 podman[77694]: 2026-01-29 09:12:13.431454365 +0000 UTC m=+0.053988079 container create a816b2f7ebf4d020abca57c4051b6f5d50a79277c25c44e4aeee7b6a48bad119 (image=quay.io/ceph/ceph:v20, name=upbeat_ride, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:13 compute-0 systemd[1]: Started libpod-conmon-a816b2f7ebf4d020abca57c4051b6f5d50a79277c25c44e4aeee7b6a48bad119.scope.
Jan 29 09:12:13 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42bb5f7383c4199515be01ace9edf51f9065786603b8638543d80c7fdc82e102/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42bb5f7383c4199515be01ace9edf51f9065786603b8638543d80c7fdc82e102/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42bb5f7383c4199515be01ace9edf51f9065786603b8638543d80c7fdc82e102/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:13 compute-0 podman[77694]: 2026-01-29 09:12:13.410510649 +0000 UTC m=+0.033044393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:13 compute-0 podman[77694]: 2026-01-29 09:12:13.520746395 +0000 UTC m=+0.143280129 container init a816b2f7ebf4d020abca57c4051b6f5d50a79277c25c44e4aeee7b6a48bad119 (image=quay.io/ceph/ceph:v20, name=upbeat_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 09:12:13 compute-0 podman[77694]: 2026-01-29 09:12:13.53019511 +0000 UTC m=+0.152728824 container start a816b2f7ebf4d020abca57c4051b6f5d50a79277c25c44e4aeee7b6a48bad119 (image=quay.io/ceph/ceph:v20, name=upbeat_ride, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:13 compute-0 podman[77694]: 2026-01-29 09:12:13.533734376 +0000 UTC m=+0.156268120 container attach a816b2f7ebf4d020abca57c4051b6f5d50a79277c25c44e4aeee7b6a48bad119 (image=quay.io/ceph/ceph:v20, name=upbeat_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:12:13 compute-0 sudo[77676]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:13 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:13 compute-0 sudo[77758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:13 compute-0 sudo[77758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:13 compute-0 sudo[77758]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:13 compute-0 sudo[77783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- inventory --format=json-pretty --filter-for-batch
Jan 29 09:12:13 compute-0 sudo[77783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:13 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 09:12:13 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 29 09:12:13 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:13 compute-0 ceph-mgr[75473]: [cephadm INFO root] Added label _admin to host compute-0
Jan 29 09:12:13 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 29 09:12:13 compute-0 upbeat_ride[77717]: Added label _admin to host compute-0
Jan 29 09:12:14 compute-0 systemd[1]: libpod-a816b2f7ebf4d020abca57c4051b6f5d50a79277c25c44e4aeee7b6a48bad119.scope: Deactivated successfully.
Jan 29 09:12:14 compute-0 podman[77812]: 2026-01-29 09:12:14.047423432 +0000 UTC m=+0.020514454 container died a816b2f7ebf4d020abca57c4051b6f5d50a79277c25c44e4aeee7b6a48bad119 (image=quay.io/ceph/ceph:v20, name=upbeat_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 09:12:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-42bb5f7383c4199515be01ace9edf51f9065786603b8638543d80c7fdc82e102-merged.mount: Deactivated successfully.
Jan 29 09:12:14 compute-0 podman[77812]: 2026-01-29 09:12:14.085144901 +0000 UTC m=+0.058235913 container remove a816b2f7ebf4d020abca57c4051b6f5d50a79277c25c44e4aeee7b6a48bad119 (image=quay.io/ceph/ceph:v20, name=upbeat_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:12:14 compute-0 systemd[1]: libpod-conmon-a816b2f7ebf4d020abca57c4051b6f5d50a79277c25c44e4aeee7b6a48bad119.scope: Deactivated successfully.
Jan 29 09:12:14 compute-0 podman[77837]: 2026-01-29 09:12:14.1251269 +0000 UTC m=+0.044342388 container create 8a7aed2463243e8ef02bf93c0c4f86978dfb03aca7b6e0b6b6239e8cdf2a7b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 09:12:14 compute-0 podman[77845]: 2026-01-29 09:12:14.157814972 +0000 UTC m=+0.045973222 container create b2bb06999c7140012eaed73ad266cf8287dad18fdc622ed09837c4d2c18cc96f (image=quay.io/ceph/ceph:v20, name=interesting_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:14 compute-0 systemd[1]: Started libpod-conmon-8a7aed2463243e8ef02bf93c0c4f86978dfb03aca7b6e0b6b6239e8cdf2a7b9b.scope.
Jan 29 09:12:14 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:14 compute-0 systemd[1]: Started libpod-conmon-b2bb06999c7140012eaed73ad266cf8287dad18fdc622ed09837c4d2c18cc96f.scope.
Jan 29 09:12:14 compute-0 podman[77837]: 2026-01-29 09:12:14.195766807 +0000 UTC m=+0.114982325 container init 8a7aed2463243e8ef02bf93c0c4f86978dfb03aca7b6e0b6b6239e8cdf2a7b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:12:14 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632be23d2c59d0b332c1f0ada801d8611e765d938b0a94babf9fcedb58e0e355/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632be23d2c59d0b332c1f0ada801d8611e765d938b0a94babf9fcedb58e0e355/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632be23d2c59d0b332c1f0ada801d8611e765d938b0a94babf9fcedb58e0e355/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:14 compute-0 podman[77837]: 2026-01-29 09:12:14.201425289 +0000 UTC m=+0.120640777 container start 8a7aed2463243e8ef02bf93c0c4f86978dfb03aca7b6e0b6b6239e8cdf2a7b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:14 compute-0 podman[77837]: 2026-01-29 09:12:14.107932656 +0000 UTC m=+0.027148174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:14 compute-0 podman[77837]: 2026-01-29 09:12:14.208802669 +0000 UTC m=+0.128018177 container attach 8a7aed2463243e8ef02bf93c0c4f86978dfb03aca7b6e0b6b6239e8cdf2a7b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_allen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:14 compute-0 quizzical_allen[77866]: 167 167
Jan 29 09:12:14 compute-0 systemd[1]: libpod-8a7aed2463243e8ef02bf93c0c4f86978dfb03aca7b6e0b6b6239e8cdf2a7b9b.scope: Deactivated successfully.
Jan 29 09:12:14 compute-0 podman[77837]: 2026-01-29 09:12:14.214219115 +0000 UTC m=+0.133434603 container died 8a7aed2463243e8ef02bf93c0c4f86978dfb03aca7b6e0b6b6239e8cdf2a7b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 09:12:14 compute-0 podman[77845]: 2026-01-29 09:12:14.229268161 +0000 UTC m=+0.117426441 container init b2bb06999c7140012eaed73ad266cf8287dad18fdc622ed09837c4d2c18cc96f (image=quay.io/ceph/ceph:v20, name=interesting_keldysh, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:14 compute-0 podman[77845]: 2026-01-29 09:12:14.136305442 +0000 UTC m=+0.024463722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:14 compute-0 podman[77845]: 2026-01-29 09:12:14.235680494 +0000 UTC m=+0.123838744 container start b2bb06999c7140012eaed73ad266cf8287dad18fdc622ed09837c4d2c18cc96f (image=quay.io/ceph/ceph:v20, name=interesting_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:14 compute-0 podman[77845]: 2026-01-29 09:12:14.241053239 +0000 UTC m=+0.129211509 container attach b2bb06999c7140012eaed73ad266cf8287dad18fdc622ed09837c4d2c18cc96f (image=quay.io/ceph/ceph:v20, name=interesting_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 09:12:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-255d83760afa6def432be356750ff6183a3ad39d9206faa5c8f649f011fee9db-merged.mount: Deactivated successfully.
Jan 29 09:12:14 compute-0 podman[77837]: 2026-01-29 09:12:14.265042417 +0000 UTC m=+0.184257905 container remove 8a7aed2463243e8ef02bf93c0c4f86978dfb03aca7b6e0b6b6239e8cdf2a7b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_allen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:14 compute-0 systemd[1]: libpod-conmon-8a7aed2463243e8ef02bf93c0c4f86978dfb03aca7b6e0b6b6239e8cdf2a7b9b.scope: Deactivated successfully.
Jan 29 09:12:14 compute-0 ceph-mon[75183]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:14 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:14 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:14 compute-0 ceph-mon[75183]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:14 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:14 compute-0 ceph-mon[75183]: Added label _admin to host compute-0
Jan 29 09:12:14 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 29 09:12:14 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1581654331' entity='client.admin' 
Jan 29 09:12:14 compute-0 interesting_keldysh[77871]: set mgr/dashboard/cluster/status
Jan 29 09:12:14 compute-0 systemd[1]: libpod-b2bb06999c7140012eaed73ad266cf8287dad18fdc622ed09837c4d2c18cc96f.scope: Deactivated successfully.
Jan 29 09:12:14 compute-0 podman[77845]: 2026-01-29 09:12:14.843690387 +0000 UTC m=+0.731848637 container died b2bb06999c7140012eaed73ad266cf8287dad18fdc622ed09837c4d2c18cc96f (image=quay.io/ceph/ceph:v20, name=interesting_keldysh, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 09:12:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-632be23d2c59d0b332c1f0ada801d8611e765d938b0a94babf9fcedb58e0e355-merged.mount: Deactivated successfully.
Jan 29 09:12:14 compute-0 podman[77845]: 2026-01-29 09:12:14.877180901 +0000 UTC m=+0.765339151 container remove b2bb06999c7140012eaed73ad266cf8287dad18fdc622ed09837c4d2c18cc96f (image=quay.io/ceph/ceph:v20, name=interesting_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 29 09:12:14 compute-0 systemd[1]: libpod-conmon-b2bb06999c7140012eaed73ad266cf8287dad18fdc622ed09837c4d2c18cc96f.scope: Deactivated successfully.
Jan 29 09:12:14 compute-0 systemd[1]: Reloading.
Jan 29 09:12:14 compute-0 systemd-rc-local-generator[77948]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:12:14 compute-0 systemd-sysv-generator[77954]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:12:15 compute-0 sudo[74147]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:15 compute-0 podman[77970]: 2026-01-29 09:12:15.30613618 +0000 UTC m=+0.041925722 container create 27578de93bbb8797500aa333409366194a3bb3775c96fe76b75b5298493e8c50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 09:12:15 compute-0 systemd[1]: Started libpod-conmon-27578de93bbb8797500aa333409366194a3bb3775c96fe76b75b5298493e8c50.scope.
Jan 29 09:12:15 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d302b50d469a1fc2f8ce3b7dffd890e503da9c01dbf39facdbf9932f0bbf31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d302b50d469a1fc2f8ce3b7dffd890e503da9c01dbf39facdbf9932f0bbf31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d302b50d469a1fc2f8ce3b7dffd890e503da9c01dbf39facdbf9932f0bbf31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d302b50d469a1fc2f8ce3b7dffd890e503da9c01dbf39facdbf9932f0bbf31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:15 compute-0 podman[77970]: 2026-01-29 09:12:15.288361781 +0000 UTC m=+0.024151353 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:15 compute-0 podman[77970]: 2026-01-29 09:12:15.39056632 +0000 UTC m=+0.126355902 container init 27578de93bbb8797500aa333409366194a3bb3775c96fe76b75b5298493e8c50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:15 compute-0 podman[77970]: 2026-01-29 09:12:15.397480526 +0000 UTC m=+0.133270078 container start 27578de93bbb8797500aa333409366194a3bb3775c96fe76b75b5298493e8c50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_carson, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:12:15 compute-0 podman[77970]: 2026-01-29 09:12:15.401522955 +0000 UTC m=+0.137312507 container attach 27578de93bbb8797500aa333409366194a3bb3775c96fe76b75b5298493e8c50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_carson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:15 compute-0 sudo[78015]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyomsurjnybsuvzgnwmeelstnlaxtgqf ; /usr/bin/python3'
Jan 29 09:12:15 compute-0 sudo[78015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:12:15 compute-0 python3[78017]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:12:15 compute-0 podman[78023]: 2026-01-29 09:12:15.714509454 +0000 UTC m=+0.031062799 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:15 compute-0 podman[78023]: 2026-01-29 09:12:15.830704671 +0000 UTC m=+0.147257986 container create d3353393c839ab498554ee8d45bf930121c7a3752816be249a04d0ef12bef57a (image=quay.io/ceph/ceph:v20, name=busy_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 09:12:15 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1581654331' entity='client.admin' 
Jan 29 09:12:15 compute-0 systemd[1]: Started libpod-conmon-d3353393c839ab498554ee8d45bf930121c7a3752816be249a04d0ef12bef57a.scope.
Jan 29 09:12:15 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091aa06e34f7c94debc2e301f9654de9028295a3822831c57342bc466b63acb8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091aa06e34f7c94debc2e301f9654de9028295a3822831c57342bc466b63acb8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:15 compute-0 podman[78023]: 2026-01-29 09:12:15.913960468 +0000 UTC m=+0.230513813 container init d3353393c839ab498554ee8d45bf930121c7a3752816be249a04d0ef12bef57a (image=quay.io/ceph/ceph:v20, name=busy_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 09:12:15 compute-0 podman[78023]: 2026-01-29 09:12:15.922871659 +0000 UTC m=+0.239424984 container start d3353393c839ab498554ee8d45bf930121c7a3752816be249a04d0ef12bef57a (image=quay.io/ceph/ceph:v20, name=busy_carson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 29 09:12:15 compute-0 podman[78023]: 2026-01-29 09:12:15.927331009 +0000 UTC m=+0.243884354 container attach d3353393c839ab498554ee8d45bf930121c7a3752816be249a04d0ef12bef57a (image=quay.io/ceph/ceph:v20, name=busy_carson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:15 compute-0 ceph-mgr[75473]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 29 09:12:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:15 compute-0 ceph-mon[75183]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 29 09:12:16 compute-0 charming_carson[77987]: [
Jan 29 09:12:16 compute-0 charming_carson[77987]:     {
Jan 29 09:12:16 compute-0 charming_carson[77987]:         "available": false,
Jan 29 09:12:16 compute-0 charming_carson[77987]:         "being_replaced": false,
Jan 29 09:12:16 compute-0 charming_carson[77987]:         "ceph_device_lvm": false,
Jan 29 09:12:16 compute-0 charming_carson[77987]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 29 09:12:16 compute-0 charming_carson[77987]:         "lsm_data": {},
Jan 29 09:12:16 compute-0 charming_carson[77987]:         "lvs": [],
Jan 29 09:12:16 compute-0 charming_carson[77987]:         "path": "/dev/sr0",
Jan 29 09:12:16 compute-0 charming_carson[77987]:         "rejected_reasons": [
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "Insufficient space (<5GB)",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "Has a FileSystem"
Jan 29 09:12:16 compute-0 charming_carson[77987]:         ],
Jan 29 09:12:16 compute-0 charming_carson[77987]:         "sys_api": {
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "actuators": null,
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "device_nodes": [
Jan 29 09:12:16 compute-0 charming_carson[77987]:                 "sr0"
Jan 29 09:12:16 compute-0 charming_carson[77987]:             ],
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "devname": "sr0",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "human_readable_size": "482.00 KB",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "id_bus": "ata",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "model": "QEMU DVD-ROM",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "nr_requests": "2",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "parent": "/dev/sr0",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "partitions": {},
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "path": "/dev/sr0",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "removable": "1",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "rev": "2.5+",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "ro": "0",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "rotational": "1",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "sas_address": "",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "sas_device_handle": "",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "scheduler_mode": "mq-deadline",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "sectors": 0,
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "sectorsize": "2048",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "size": 493568.0,
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "support_discard": "2048",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "type": "disk",
Jan 29 09:12:16 compute-0 charming_carson[77987]:             "vendor": "QEMU"
Jan 29 09:12:16 compute-0 charming_carson[77987]:         }
Jan 29 09:12:16 compute-0 charming_carson[77987]:     }
Jan 29 09:12:16 compute-0 charming_carson[77987]: ]
Jan 29 09:12:16 compute-0 systemd[1]: libpod-27578de93bbb8797500aa333409366194a3bb3775c96fe76b75b5298493e8c50.scope: Deactivated successfully.
Jan 29 09:12:16 compute-0 conmon[77987]: conmon 27578de93bbb8797500a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27578de93bbb8797500aa333409366194a3bb3775c96fe76b75b5298493e8c50.scope/container/memory.events
Jan 29 09:12:16 compute-0 podman[77970]: 2026-01-29 09:12:16.119363633 +0000 UTC m=+0.855153175 container died 27578de93bbb8797500aa333409366194a3bb3775c96fe76b75b5298493e8c50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_carson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3d302b50d469a1fc2f8ce3b7dffd890e503da9c01dbf39facdbf9932f0bbf31-merged.mount: Deactivated successfully.
Jan 29 09:12:16 compute-0 podman[77970]: 2026-01-29 09:12:16.169718002 +0000 UTC m=+0.905507544 container remove 27578de93bbb8797500aa333409366194a3bb3775c96fe76b75b5298493e8c50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_carson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:12:16 compute-0 systemd[1]: libpod-conmon-27578de93bbb8797500aa333409366194a3bb3775c96fe76b75b5298493e8c50.scope: Deactivated successfully.
Jan 29 09:12:16 compute-0 sudo[77783]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:12:16 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:16 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:12:16 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:16 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 29 09:12:16 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 09:12:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:12:16 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:12:16 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:12:16 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 29 09:12:16 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 29 09:12:16 compute-0 sudo[78764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 29 09:12:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 29 09:12:16 compute-0 sudo[78764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:16 compute-0 sudo[78764]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:16 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2429381940' entity='client.admin' 
Jan 29 09:12:16 compute-0 systemd[1]: libpod-d3353393c839ab498554ee8d45bf930121c7a3752816be249a04d0ef12bef57a.scope: Deactivated successfully.
Jan 29 09:12:16 compute-0 podman[78023]: 2026-01-29 09:12:16.352004683 +0000 UTC m=+0.668558008 container died d3353393c839ab498554ee8d45bf930121c7a3752816be249a04d0ef12bef57a (image=quay.io/ceph/ceph:v20, name=busy_carson, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-091aa06e34f7c94debc2e301f9654de9028295a3822831c57342bc466b63acb8-merged.mount: Deactivated successfully.
Jan 29 09:12:16 compute-0 sudo[78791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/etc/ceph
Jan 29 09:12:16 compute-0 podman[78023]: 2026-01-29 09:12:16.391775015 +0000 UTC m=+0.708328340 container remove d3353393c839ab498554ee8d45bf930121c7a3752816be249a04d0ef12bef57a (image=quay.io/ceph/ceph:v20, name=busy_carson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:16 compute-0 sudo[78791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:16 compute-0 systemd[1]: libpod-conmon-d3353393c839ab498554ee8d45bf930121c7a3752816be249a04d0ef12bef57a.scope: Deactivated successfully.
Jan 29 09:12:16 compute-0 sudo[78791]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:16 compute-0 sudo[78015]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:16 compute-0 sudo[78829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/etc/ceph/ceph.conf.new
Jan 29 09:12:16 compute-0 sudo[78829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:16 compute-0 sudo[78829]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:16 compute-0 sudo[78854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:16 compute-0 sudo[78854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:16 compute-0 sudo[78854]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:16 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:16 compute-0 sudo[78879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/etc/ceph/ceph.conf.new
Jan 29 09:12:16 compute-0 sudo[78879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:16 compute-0 sudo[78879]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:16 compute-0 sudo[78927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/etc/ceph/ceph.conf.new
Jan 29 09:12:16 compute-0 sudo[78927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:16 compute-0 sudo[78927]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:16 compute-0 sudo[78952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/etc/ceph/ceph.conf.new
Jan 29 09:12:16 compute-0 sudo[78952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:16 compute-0 sudo[78952]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:16 compute-0 sudo[79000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 29 09:12:16 compute-0 sudo[79000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:16 compute-0 sudo[79000]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:16 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.conf
Jan 29 09:12:16 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.conf
Jan 29 09:12:16 compute-0 sudo[79054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config
Jan 29 09:12:16 compute-0 sudo[79054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:16 compute-0 sudo[79054]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:16 compute-0 ceph-mon[75183]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:16 compute-0 ceph-mon[75183]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 29 09:12:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 09:12:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:12:16 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2429381940' entity='client.admin' 
Jan 29 09:12:16 compute-0 sudo[79102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config
Jan 29 09:12:16 compute-0 sudo[79102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:16 compute-0 sudo[79102]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:16 compute-0 sudo[79127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.conf.new
Jan 29 09:12:16 compute-0 sudo[79127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:16 compute-0 sudo[79127]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:16 compute-0 sudo[79152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:16 compute-0 sudo[79152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:16 compute-0 sudo[79152]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 sudo[79177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.conf.new
Jan 29 09:12:17 compute-0 sudo[79177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79177]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 sudo[79248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.conf.new
Jan 29 09:12:17 compute-0 sudo[79248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79248]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 sudo[79345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfhpdppwrygqezjkqmevbsfuitouqilz ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769677936.7142112-36644-76536978952955/async_wrapper.py j610501086339 30 /home/zuul/.ansible/tmp/ansible-tmp-1769677936.7142112-36644-76536978952955/AnsiballZ_command.py _'
Jan 29 09:12:17 compute-0 sudo[79298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.conf.new
Jan 29 09:12:17 compute-0 sudo[79298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:12:17 compute-0 sudo[79298]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 sudo[79350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.conf.new /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.conf
Jan 29 09:12:17 compute-0 sudo[79350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79350]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 29 09:12:17 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 29 09:12:17 compute-0 sudo[79375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 29 09:12:17 compute-0 sudo[79375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79375]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 sudo[79400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/etc/ceph
Jan 29 09:12:17 compute-0 sudo[79400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79400]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 ansible-async_wrapper.py[79349]: Invoked with j610501086339 30 /home/zuul/.ansible/tmp/ansible-tmp-1769677936.7142112-36644-76536978952955/AnsiballZ_command.py _
Jan 29 09:12:17 compute-0 ansible-async_wrapper.py[79431]: Starting module and watcher
Jan 29 09:12:17 compute-0 ansible-async_wrapper.py[79431]: Start watching 79433 (30)
Jan 29 09:12:17 compute-0 ansible-async_wrapper.py[79433]: Start module (79433)
Jan 29 09:12:17 compute-0 ansible-async_wrapper.py[79349]: Return async_wrapper task started.
Jan 29 09:12:17 compute-0 sudo[79345]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 sudo[79425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/etc/ceph/ceph.client.admin.keyring.new
Jan 29 09:12:17 compute-0 sudo[79425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79425]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 sudo[79455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:17 compute-0 sudo[79455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79455]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 sudo[79480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/etc/ceph/ceph.client.admin.keyring.new
Jan 29 09:12:17 compute-0 sudo[79480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79480]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 python3[79437]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:12:17 compute-0 podman[79509]: 2026-01-29 09:12:17.58293331 +0000 UTC m=+0.048912702 container create eb0d6e4202d60200b65d02f1579790a903c8ea0cd6d5000fcd4e51659748f422 (image=quay.io/ceph/ceph:v20, name=keen_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:17 compute-0 sudo[79539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/etc/ceph/ceph.client.admin.keyring.new
Jan 29 09:12:17 compute-0 sudo[79539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79539]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 systemd[1]: Started libpod-conmon-eb0d6e4202d60200b65d02f1579790a903c8ea0cd6d5000fcd4e51659748f422.scope.
Jan 29 09:12:17 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b9479f80fe526e6fb94578dedb77069d9079ef0dc7c3afa1bb39458d712b98/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b9479f80fe526e6fb94578dedb77069d9079ef0dc7c3afa1bb39458d712b98/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:17 compute-0 sudo[79569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/etc/ceph/ceph.client.admin.keyring.new
Jan 29 09:12:17 compute-0 podman[79509]: 2026-01-29 09:12:17.565753496 +0000 UTC m=+0.031732918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:17 compute-0 sudo[79569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79569]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 podman[79509]: 2026-01-29 09:12:17.672093597 +0000 UTC m=+0.138073009 container init eb0d6e4202d60200b65d02f1579790a903c8ea0cd6d5000fcd4e51659748f422 (image=quay.io/ceph/ceph:v20, name=keen_bell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 29 09:12:17 compute-0 podman[79509]: 2026-01-29 09:12:17.67888075 +0000 UTC m=+0.144860142 container start eb0d6e4202d60200b65d02f1579790a903c8ea0cd6d5000fcd4e51659748f422 (image=quay.io/ceph/ceph:v20, name=keen_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 09:12:17 compute-0 podman[79509]: 2026-01-29 09:12:17.688815348 +0000 UTC m=+0.154794820 container attach eb0d6e4202d60200b65d02f1579790a903c8ea0cd6d5000fcd4e51659748f422 (image=quay.io/ceph/ceph:v20, name=keen_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:17 compute-0 sudo[79598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 29 09:12:17 compute-0 sudo[79598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79598]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.client.admin.keyring
Jan 29 09:12:17 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.client.admin.keyring
Jan 29 09:12:17 compute-0 sudo[79623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config
Jan 29 09:12:17 compute-0 sudo[79623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79623]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 sudo[79650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config
Jan 29 09:12:17 compute-0 sudo[79650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79650]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 ceph-mon[75183]: Updating compute-0:/etc/ceph/ceph.conf
Jan 29 09:12:17 compute-0 ceph-mon[75183]: Updating compute-0:/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.conf
Jan 29 09:12:17 compute-0 sudo[79692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.client.admin.keyring.new
Jan 29 09:12:17 compute-0 sudo[79692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79692]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:17 compute-0 sudo[79717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:17 compute-0 sudo[79717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:17 compute-0 sudo[79717]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:17 compute-0 sudo[79742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.client.admin.keyring.new
Jan 29 09:12:17 compute-0 sudo[79742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:18 compute-0 sudo[79742]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:18 compute-0 sudo[79790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.client.admin.keyring.new
Jan 29 09:12:18 compute-0 sudo[79790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:12:18 compute-0 sudo[79790]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 09:12:18 compute-0 keen_bell[79571]: 
Jan 29 09:12:18 compute-0 keen_bell[79571]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 29 09:12:18 compute-0 sudo[79815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.client.admin.keyring.new
Jan 29 09:12:18 compute-0 sudo[79815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:18 compute-0 sudo[79815]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:18 compute-0 systemd[1]: libpod-eb0d6e4202d60200b65d02f1579790a903c8ea0cd6d5000fcd4e51659748f422.scope: Deactivated successfully.
Jan 29 09:12:18 compute-0 podman[79509]: 2026-01-29 09:12:18.162625418 +0000 UTC m=+0.628604820 container died eb0d6e4202d60200b65d02f1579790a903c8ea0cd6d5000fcd4e51659748f422 (image=quay.io/ceph/ceph:v20, name=keen_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 09:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5b9479f80fe526e6fb94578dedb77069d9079ef0dc7c3afa1bb39458d712b98-merged.mount: Deactivated successfully.
Jan 29 09:12:18 compute-0 podman[79509]: 2026-01-29 09:12:18.20346444 +0000 UTC m=+0.669443832 container remove eb0d6e4202d60200b65d02f1579790a903c8ea0cd6d5000fcd4e51659748f422 (image=quay.io/ceph/ceph:v20, name=keen_bell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 09:12:18 compute-0 sudo[79843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-3fdce3ca-565d-5459-88e8-1ffe58b48437/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.client.admin.keyring.new /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.client.admin.keyring
Jan 29 09:12:18 compute-0 sudo[79843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:18 compute-0 systemd[1]: libpod-conmon-eb0d6e4202d60200b65d02f1579790a903c8ea0cd6d5000fcd4e51659748f422.scope: Deactivated successfully.
Jan 29 09:12:18 compute-0 sudo[79843]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:18 compute-0 ansible-async_wrapper.py[79433]: Module complete (79433)
Jan 29 09:12:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:12:18 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:18 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:12:18 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:18 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 0ea324b1-9053-48ad-ab74-e9002a4d189f (Updating crash deployment (+1 -> 1))
Jan 29 09:12:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 29 09:12:18 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 29 09:12:18 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 29 09:12:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:12:18 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:18 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 29 09:12:18 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 29 09:12:18 compute-0 sudo[79878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:18 compute-0 sudo[79878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:18 compute-0 sudo[79878]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:18 compute-0 sudo[79903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:18 compute-0 sudo[79903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:18 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:18 compute-0 sudo[79997]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuvvfzjmfjkcolquvdvmtmchuphwzogh ; /usr/bin/python3'
Jan 29 09:12:18 compute-0 sudo[79997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:12:18 compute-0 podman[80015]: 2026-01-29 09:12:18.727282361 +0000 UTC m=+0.039918249 container create d1231908001ac1175dea6fb37746c046e17e0eeb9d71d012600fb56d24afec0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ganguly, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:18 compute-0 systemd[1]: Started libpod-conmon-d1231908001ac1175dea6fb37746c046e17e0eeb9d71d012600fb56d24afec0a.scope.
Jan 29 09:12:18 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:18 compute-0 podman[80015]: 2026-01-29 09:12:18.706606362 +0000 UTC m=+0.019242270 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:18 compute-0 python3[80003]: ansible-ansible.legacy.async_status Invoked with jid=j610501086339.79349 mode=status _async_dir=/root/.ansible_async
Jan 29 09:12:18 compute-0 podman[80015]: 2026-01-29 09:12:18.833925189 +0000 UTC m=+0.146561097 container init d1231908001ac1175dea6fb37746c046e17e0eeb9d71d012600fb56d24afec0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:18 compute-0 podman[80015]: 2026-01-29 09:12:18.840086676 +0000 UTC m=+0.152722564 container start d1231908001ac1175dea6fb37746c046e17e0eeb9d71d012600fb56d24afec0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 29 09:12:18 compute-0 sudo[79997]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:18 compute-0 upbeat_ganguly[80031]: 167 167
Jan 29 09:12:18 compute-0 systemd[1]: libpod-d1231908001ac1175dea6fb37746c046e17e0eeb9d71d012600fb56d24afec0a.scope: Deactivated successfully.
Jan 29 09:12:18 compute-0 podman[80015]: 2026-01-29 09:12:18.852597763 +0000 UTC m=+0.165233701 container attach d1231908001ac1175dea6fb37746c046e17e0eeb9d71d012600fb56d24afec0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:18 compute-0 podman[80015]: 2026-01-29 09:12:18.853027415 +0000 UTC m=+0.165663303 container died d1231908001ac1175dea6fb37746c046e17e0eeb9d71d012600fb56d24afec0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:18 compute-0 ceph-mon[75183]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 29 09:12:18 compute-0 ceph-mon[75183]: Updating compute-0:/var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/config/ceph.client.admin.keyring
Jan 29 09:12:18 compute-0 ceph-mon[75183]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:18 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:18 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:18 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:18 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 29 09:12:18 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 29 09:12:18 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ff8d4bc22e034c75c47046f850b71c4dfda62f359d9f99b921b7b96213dc5b9-merged.mount: Deactivated successfully.
Jan 29 09:12:18 compute-0 sudo[80096]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upqjszusstulffjivuevozhpcupsvryo ; /usr/bin/python3'
Jan 29 09:12:18 compute-0 sudo[80096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:12:18 compute-0 podman[80015]: 2026-01-29 09:12:18.965760138 +0000 UTC m=+0.278396026 container remove d1231908001ac1175dea6fb37746c046e17e0eeb9d71d012600fb56d24afec0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ganguly, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 09:12:18 compute-0 systemd[1]: libpod-conmon-d1231908001ac1175dea6fb37746c046e17e0eeb9d71d012600fb56d24afec0a.scope: Deactivated successfully.
Jan 29 09:12:19 compute-0 python3[80098]: ansible-ansible.legacy.async_status Invoked with jid=j610501086339.79349 mode=cleanup _async_dir=/root/.ansible_async
Jan 29 09:12:19 compute-0 sudo[80096]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:19 compute-0 systemd[1]: Reloading.
Jan 29 09:12:19 compute-0 systemd-sysv-generator[80127]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:12:19 compute-0 systemd-rc-local-generator[80119]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:12:19 compute-0 sudo[80157]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgofffkhkqdzqmgsdbxepffndiffoznf ; /usr/bin/python3'
Jan 29 09:12:19 compute-0 sudo[80157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:12:19 compute-0 systemd[1]: Reloading.
Jan 29 09:12:19 compute-0 systemd-rc-local-generator[80188]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:12:19 compute-0 systemd-sysv-generator[80192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:12:19 compute-0 python3[80161]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 09:12:19 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 3fdce3ca-565d-5459-88e8-1ffe58b48437...
Jan 29 09:12:19 compute-0 sudo[80157]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:19 compute-0 ceph-mon[75183]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 09:12:19 compute-0 ceph-mon[75183]: Deploying daemon crash.compute-0 on compute-0
Jan 29 09:12:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:19 compute-0 podman[80251]: 2026-01-29 09:12:19.956665547 +0000 UTC m=+0.041647565 container create 8ed393bfd921a4743c4cf5cced2f03ebbb3def114e934ac9d6733b843bf5f38a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 29 09:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/507fbc6c9bbbc47b9f69f054744c311fe7912637907b224f1574014ac336fbaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/507fbc6c9bbbc47b9f69f054744c311fe7912637907b224f1574014ac336fbaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/507fbc6c9bbbc47b9f69f054744c311fe7912637907b224f1574014ac336fbaf/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/507fbc6c9bbbc47b9f69f054744c311fe7912637907b224f1574014ac336fbaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:20 compute-0 podman[80251]: 2026-01-29 09:12:20.019916573 +0000 UTC m=+0.104898611 container init 8ed393bfd921a4743c4cf5cced2f03ebbb3def114e934ac9d6733b843bf5f38a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:20 compute-0 podman[80251]: 2026-01-29 09:12:20.024894158 +0000 UTC m=+0.109876176 container start 8ed393bfd921a4743c4cf5cced2f03ebbb3def114e934ac9d6733b843bf5f38a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:20 compute-0 bash[80251]: 8ed393bfd921a4743c4cf5cced2f03ebbb3def114e934ac9d6733b843bf5f38a
Jan 29 09:12:20 compute-0 podman[80251]: 2026-01-29 09:12:19.935028473 +0000 UTC m=+0.020010521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:20 compute-0 systemd[1]: Started Ceph crash.compute-0 for 3fdce3ca-565d-5459-88e8-1ffe58b48437.
Jan 29 09:12:20 compute-0 sudo[80294]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emcgwcamvkmpcrmbssravmahiemqwwsv ; /usr/bin/python3'
Jan 29 09:12:20 compute-0 sudo[80294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:12:20 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-crash-compute-0[80266]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 29 09:12:20 compute-0 sudo[79903]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:20 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:12:20 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:20 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:20 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:20 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 29 09:12:20 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:20 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 0ea324b1-9053-48ad-ab74-e9002a4d189f (Updating crash deployment (+1 -> 1))
Jan 29 09:12:20 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 0ea324b1-9053-48ad-ab74-e9002a4d189f (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 29 09:12:20 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 29 09:12:20 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:20 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 29 09:12:20 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-crash-compute-0[80266]: 2026-01-29T09:12:20.173+0000 7fe60c9ed640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 29 09:12:20 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-crash-compute-0[80266]: 2026-01-29T09:12:20.173+0000 7fe60c9ed640 -1 AuthRegistry(0x7fe604052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 29 09:12:20 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-crash-compute-0[80266]: 2026-01-29T09:12:20.174+0000 7fe60c9ed640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 29 09:12:20 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-crash-compute-0[80266]: 2026-01-29T09:12:20.174+0000 7fe60c9ed640 -1 AuthRegistry(0x7fe60c9ebfe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 29 09:12:20 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-crash-compute-0[80266]: 2026-01-29T09:12:20.175+0000 7fe60a762640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 29 09:12:20 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-crash-compute-0[80266]: 2026-01-29T09:12:20.175+0000 7fe60c9ed640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 29 09:12:20 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-crash-compute-0[80266]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 29 09:12:20 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:20 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-crash-compute-0[80266]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 29 09:12:20 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev d740b154-4952-4ef7-80d5-4bc5fcf69691 (Updating mgr deployment (+1 -> 2))
Jan 29 09:12:20 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.azpxyn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 29 09:12:20 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.azpxyn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 29 09:12:20 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.azpxyn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 29 09:12:20 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 29 09:12:20 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mgr services"} : dispatch
Jan 29 09:12:20 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:12:20 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:20 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.azpxyn on compute-0
Jan 29 09:12:20 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.azpxyn on compute-0
Jan 29 09:12:20 compute-0 python3[80296]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:12:20 compute-0 sudo[80309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:20 compute-0 sudo[80309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:20 compute-0 sudo[80309]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:20 compute-0 podman[80332]: 2026-01-29 09:12:20.288496313 +0000 UTC m=+0.050831233 container create 5e3f188f3c8513255bb28070752fb155cf58cdcb77c7e82f6da2b1e2ba00bd95 (image=quay.io/ceph/ceph:v20, name=affectionate_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:12:20 compute-0 sudo[80344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:20 compute-0 sudo[80344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:20 compute-0 systemd[1]: Started libpod-conmon-5e3f188f3c8513255bb28070752fb155cf58cdcb77c7e82f6da2b1e2ba00bd95.scope.
Jan 29 09:12:20 compute-0 podman[80332]: 2026-01-29 09:12:20.262849351 +0000 UTC m=+0.025184291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:20 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d50f3f9e1eb1af4b7677b70142b7a1cdc4c4a52962cc01fb5ce50685b80d318/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d50f3f9e1eb1af4b7677b70142b7a1cdc4c4a52962cc01fb5ce50685b80d318/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d50f3f9e1eb1af4b7677b70142b7a1cdc4c4a52962cc01fb5ce50685b80d318/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:20 compute-0 podman[80332]: 2026-01-29 09:12:20.384501185 +0000 UTC m=+0.146836135 container init 5e3f188f3c8513255bb28070752fb155cf58cdcb77c7e82f6da2b1e2ba00bd95 (image=quay.io/ceph/ceph:v20, name=affectionate_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:12:20 compute-0 podman[80332]: 2026-01-29 09:12:20.391563536 +0000 UTC m=+0.153898456 container start 5e3f188f3c8513255bb28070752fb155cf58cdcb77c7e82f6da2b1e2ba00bd95 (image=quay.io/ceph/ceph:v20, name=affectionate_hugle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 09:12:20 compute-0 podman[80332]: 2026-01-29 09:12:20.395497652 +0000 UTC m=+0.157832592 container attach 5e3f188f3c8513255bb28070752fb155cf58cdcb77c7e82f6da2b1e2ba00bd95 (image=quay.io/ceph/ceph:v20, name=affectionate_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:20 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:20 compute-0 podman[80437]: 2026-01-29 09:12:20.720597418 +0000 UTC m=+0.060913936 container create 925b4541364942d28b97d92652e7a93ee4bd3ddf2b77afd78b171863a8014d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 09:12:20 compute-0 systemd[1]: Started libpod-conmon-925b4541364942d28b97d92652e7a93ee4bd3ddf2b77afd78b171863a8014d16.scope.
Jan 29 09:12:20 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:20 compute-0 podman[80437]: 2026-01-29 09:12:20.785005576 +0000 UTC m=+0.125322064 container init 925b4541364942d28b97d92652e7a93ee4bd3ddf2b77afd78b171863a8014d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:20 compute-0 podman[80437]: 2026-01-29 09:12:20.69439957 +0000 UTC m=+0.034716148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:20 compute-0 podman[80437]: 2026-01-29 09:12:20.790113644 +0000 UTC m=+0.130430122 container start 925b4541364942d28b97d92652e7a93ee4bd3ddf2b77afd78b171863a8014d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 09:12:20 compute-0 relaxed_wilson[80454]: 167 167
Jan 29 09:12:20 compute-0 systemd[1]: libpod-925b4541364942d28b97d92652e7a93ee4bd3ddf2b77afd78b171863a8014d16.scope: Deactivated successfully.
Jan 29 09:12:20 compute-0 podman[80437]: 2026-01-29 09:12:20.798501931 +0000 UTC m=+0.138818419 container attach 925b4541364942d28b97d92652e7a93ee4bd3ddf2b77afd78b171863a8014d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 09:12:20 compute-0 podman[80437]: 2026-01-29 09:12:20.799021355 +0000 UTC m=+0.139337843 container died 925b4541364942d28b97d92652e7a93ee4bd3ddf2b77afd78b171863a8014d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilson, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 09:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee74dff96ec5f6ee7853b65ad3e6254e31c1806c21fe0d16b288a805b2be6309-merged.mount: Deactivated successfully.
Jan 29 09:12:20 compute-0 podman[80437]: 2026-01-29 09:12:20.850220597 +0000 UTC m=+0.190537075 container remove 925b4541364942d28b97d92652e7a93ee4bd3ddf2b77afd78b171863a8014d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 29 09:12:20 compute-0 systemd[1]: libpod-conmon-925b4541364942d28b97d92652e7a93ee4bd3ddf2b77afd78b171863a8014d16.scope: Deactivated successfully.
Jan 29 09:12:20 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 09:12:20 compute-0 affectionate_hugle[80374]: 
Jan 29 09:12:20 compute-0 affectionate_hugle[80374]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 29 09:12:20 compute-0 systemd[1]: libpod-5e3f188f3c8513255bb28070752fb155cf58cdcb77c7e82f6da2b1e2ba00bd95.scope: Deactivated successfully.
Jan 29 09:12:20 compute-0 podman[80332]: 2026-01-29 09:12:20.897175284 +0000 UTC m=+0.659510224 container died 5e3f188f3c8513255bb28070752fb155cf58cdcb77c7e82f6da2b1e2ba00bd95 (image=quay.io/ceph/ceph:v20, name=affectionate_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:20 compute-0 ceph-mon[75183]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.azpxyn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 29 09:12:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.azpxyn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 29 09:12:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mgr services"} : dispatch
Jan 29 09:12:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:20 compute-0 systemd[1]: Reloading.
Jan 29 09:12:20 compute-0 systemd-sysv-generator[80510]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:12:20 compute-0 systemd-rc-local-generator[80504]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:12:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d50f3f9e1eb1af4b7677b70142b7a1cdc4c4a52962cc01fb5ce50685b80d318-merged.mount: Deactivated successfully.
Jan 29 09:12:21 compute-0 podman[80332]: 2026-01-29 09:12:21.168973641 +0000 UTC m=+0.931308561 container remove 5e3f188f3c8513255bb28070752fb155cf58cdcb77c7e82f6da2b1e2ba00bd95 (image=quay.io/ceph/ceph:v20, name=affectionate_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:21 compute-0 systemd[1]: libpod-conmon-5e3f188f3c8513255bb28070752fb155cf58cdcb77c7e82f6da2b1e2ba00bd95.scope: Deactivated successfully.
Jan 29 09:12:21 compute-0 sudo[80294]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:21 compute-0 systemd[1]: Reloading.
Jan 29 09:12:21 compute-0 systemd-rc-local-generator[80554]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:12:21 compute-0 systemd-sysv-generator[80558]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:12:21 compute-0 systemd[1]: Starting Ceph mgr.compute-0.azpxyn for 3fdce3ca-565d-5459-88e8-1ffe58b48437...
Jan 29 09:12:21 compute-0 sudo[80588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxrotfrcpuxkvzkqdapsvwhfbirucchu ; /usr/bin/python3'
Jan 29 09:12:21 compute-0 sudo[80588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:12:21 compute-0 ceph-mgr[75473]: [progress INFO root] Writing back 1 completed events
Jan 29 09:12:21 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 29 09:12:21 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:21 compute-0 python3[80593]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:12:21 compute-0 podman[80637]: 2026-01-29 09:12:21.667351905 +0000 UTC m=+0.036568809 container create 5f9301762b7e24928d43611d6e46758027d464f09d64c69ac56a20291718b746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-azpxyn, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 09:12:21 compute-0 podman[80638]: 2026-01-29 09:12:21.685474724 +0000 UTC m=+0.042957951 container create f152966214ea82e98bb144fa44f52b77bff882bb5f3115f63cd99aff1d7a19b6 (image=quay.io/ceph/ceph:v20, name=amazing_mcnulty, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07aaf68d0bf68785636ff2f03f9379bf4fce370e68ebffe7ebbb4c5b209cefb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07aaf68d0bf68785636ff2f03f9379bf4fce370e68ebffe7ebbb4c5b209cefb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07aaf68d0bf68785636ff2f03f9379bf4fce370e68ebffe7ebbb4c5b209cefb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07aaf68d0bf68785636ff2f03f9379bf4fce370e68ebffe7ebbb4c5b209cefb/merged/var/lib/ceph/mgr/ceph-compute-0.azpxyn supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:21 compute-0 systemd[1]: Started libpod-conmon-f152966214ea82e98bb144fa44f52b77bff882bb5f3115f63cd99aff1d7a19b6.scope.
Jan 29 09:12:21 compute-0 podman[80637]: 2026-01-29 09:12:21.739084191 +0000 UTC m=+0.108301115 container init 5f9301762b7e24928d43611d6e46758027d464f09d64c69ac56a20291718b746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-azpxyn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 09:12:21 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:21 compute-0 podman[80637]: 2026-01-29 09:12:21.745682749 +0000 UTC m=+0.114899653 container start 5f9301762b7e24928d43611d6e46758027d464f09d64c69ac56a20291718b746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-azpxyn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15edd1368d8dc8995236683f3c0f30e8d59f70c3812caab80714632c166708a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15edd1368d8dc8995236683f3c0f30e8d59f70c3812caab80714632c166708a7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15edd1368d8dc8995236683f3c0f30e8d59f70c3812caab80714632c166708a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:21 compute-0 podman[80637]: 2026-01-29 09:12:21.649848802 +0000 UTC m=+0.019065736 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:21 compute-0 bash[80637]: 5f9301762b7e24928d43611d6e46758027d464f09d64c69ac56a20291718b746
Jan 29 09:12:21 compute-0 systemd[1]: Started Ceph mgr.compute-0.azpxyn for 3fdce3ca-565d-5459-88e8-1ffe58b48437.
Jan 29 09:12:21 compute-0 podman[80638]: 2026-01-29 09:12:21.666979965 +0000 UTC m=+0.024463212 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:21 compute-0 podman[80638]: 2026-01-29 09:12:21.766927783 +0000 UTC m=+0.124411030 container init f152966214ea82e98bb144fa44f52b77bff882bb5f3115f63cd99aff1d7a19b6 (image=quay.io/ceph/ceph:v20, name=amazing_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 09:12:21 compute-0 podman[80638]: 2026-01-29 09:12:21.781716162 +0000 UTC m=+0.139199389 container start f152966214ea82e98bb144fa44f52b77bff882bb5f3115f63cd99aff1d7a19b6 (image=quay.io/ceph/ceph:v20, name=amazing_mcnulty, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:12:21 compute-0 podman[80638]: 2026-01-29 09:12:21.786016468 +0000 UTC m=+0.143499695 container attach f152966214ea82e98bb144fa44f52b77bff882bb5f3115f63cd99aff1d7a19b6 (image=quay.io/ceph/ceph:v20, name=amazing_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 09:12:21 compute-0 ceph-mgr[80674]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 09:12:21 compute-0 ceph-mgr[80674]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 29 09:12:21 compute-0 ceph-mgr[80674]: pidfile_write: ignore empty --pid-file
Jan 29 09:12:21 compute-0 sudo[80344]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:21 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:12:21 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:21 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:21 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'alerts'
Jan 29 09:12:21 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:21 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 29 09:12:21 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:21 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev d740b154-4952-4ef7-80d5-4bc5fcf69691 (Updating mgr deployment (+1 -> 2))
Jan 29 09:12:21 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event d740b154-4952-4ef7-80d5-4bc5fcf69691 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 29 09:12:21 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 29 09:12:21 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:21 compute-0 ceph-mon[75183]: Deploying daemon mgr.compute-0.azpxyn on compute-0
Jan 29 09:12:21 compute-0 ceph-mon[75183]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 09:12:21 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:21 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:21 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:21 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:21 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:21 compute-0 sudo[80698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:12:21 compute-0 sudo[80698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:21 compute-0 sudo[80698]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:21 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'balancer'
Jan 29 09:12:21 compute-0 sudo[80740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:21 compute-0 sudo[80740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:21 compute-0 sudo[80740]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:22 compute-0 sudo[80765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 29 09:12:22 compute-0 sudo[80765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:22 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'cephadm'
Jan 29 09:12:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 29 09:12:22 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1396465038' entity='client.admin' 
Jan 29 09:12:22 compute-0 systemd[1]: libpod-f152966214ea82e98bb144fa44f52b77bff882bb5f3115f63cd99aff1d7a19b6.scope: Deactivated successfully.
Jan 29 09:12:22 compute-0 podman[80803]: 2026-01-29 09:12:22.320994209 +0000 UTC m=+0.029787985 container died f152966214ea82e98bb144fa44f52b77bff882bb5f3115f63cd99aff1d7a19b6 (image=quay.io/ceph/ceph:v20, name=amazing_mcnulty, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:22 compute-0 ansible-async_wrapper.py[79431]: Done in kid B.
Jan 29 09:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-15edd1368d8dc8995236683f3c0f30e8d59f70c3812caab80714632c166708a7-merged.mount: Deactivated successfully.
Jan 29 09:12:22 compute-0 podman[80803]: 2026-01-29 09:12:22.40883718 +0000 UTC m=+0.117630926 container remove f152966214ea82e98bb144fa44f52b77bff882bb5f3115f63cd99aff1d7a19b6 (image=quay.io/ceph/ceph:v20, name=amazing_mcnulty, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 09:12:22 compute-0 systemd[1]: libpod-conmon-f152966214ea82e98bb144fa44f52b77bff882bb5f3115f63cd99aff1d7a19b6.scope: Deactivated successfully.
Jan 29 09:12:22 compute-0 sudo[80588]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:22 compute-0 podman[80847]: 2026-01-29 09:12:22.526571979 +0000 UTC m=+0.062633742 container exec 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 29 09:12:22 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:22 compute-0 sudo[80901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qljirarbozxakahmtfxvvejoazhwzldi ; /usr/bin/python3'
Jan 29 09:12:22 compute-0 sudo[80901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:12:22 compute-0 podman[80847]: 2026-01-29 09:12:22.650090533 +0000 UTC m=+0.186152276 container exec_died 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:22 compute-0 python3[80903]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:12:22 compute-0 podman[80930]: 2026-01-29 09:12:22.777284817 +0000 UTC m=+0.047291588 container create 6520d9e4cd463d69ff8cabf94396754a8c829c773d6c62b57e816d0da0f58a4d (image=quay.io/ceph/ceph:v20, name=eloquent_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 09:12:22 compute-0 systemd[1]: Started libpod-conmon-6520d9e4cd463d69ff8cabf94396754a8c829c773d6c62b57e816d0da0f58a4d.scope.
Jan 29 09:12:22 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:22 compute-0 podman[80930]: 2026-01-29 09:12:22.756398703 +0000 UTC m=+0.026405394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d3271680ed8e904ccb0c79925b457b371cae50015b54fd8f5c794626e48e181/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d3271680ed8e904ccb0c79925b457b371cae50015b54fd8f5c794626e48e181/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d3271680ed8e904ccb0c79925b457b371cae50015b54fd8f5c794626e48e181/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:22 compute-0 podman[80930]: 2026-01-29 09:12:22.949462804 +0000 UTC m=+0.219469485 container init 6520d9e4cd463d69ff8cabf94396754a8c829c773d6c62b57e816d0da0f58a4d (image=quay.io/ceph/ceph:v20, name=eloquent_sanderson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:22 compute-0 podman[80930]: 2026-01-29 09:12:22.966156805 +0000 UTC m=+0.236163466 container start 6520d9e4cd463d69ff8cabf94396754a8c829c773d6c62b57e816d0da0f58a4d (image=quay.io/ceph/ceph:v20, name=eloquent_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:12:22 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'crash'
Jan 29 09:12:23 compute-0 podman[80930]: 2026-01-29 09:12:23.002797654 +0000 UTC m=+0.272804325 container attach 6520d9e4cd463d69ff8cabf94396754a8c829c773d6c62b57e816d0da0f58a4d (image=quay.io/ceph/ceph:v20, name=eloquent_sanderson, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 09:12:23 compute-0 ceph-mon[75183]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:23 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1396465038' entity='client.admin' 
Jan 29 09:12:23 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'dashboard'
Jan 29 09:12:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:12:23 compute-0 sudo[80765]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:12:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 29 09:12:23 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:23 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3073397346' entity='client.admin' 
Jan 29 09:12:23 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:12:23 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:12:23 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:12:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:12:23 compute-0 systemd[1]: libpod-6520d9e4cd463d69ff8cabf94396754a8c829c773d6c62b57e816d0da0f58a4d.scope: Deactivated successfully.
Jan 29 09:12:23 compute-0 podman[80930]: 2026-01-29 09:12:23.477845838 +0000 UTC m=+0.747852499 container died 6520d9e4cd463d69ff8cabf94396754a8c829c773d6c62b57e816d0da0f58a4d (image=quay.io/ceph/ceph:v20, name=eloquent_sanderson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:23 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d3271680ed8e904ccb0c79925b457b371cae50015b54fd8f5c794626e48e181-merged.mount: Deactivated successfully.
Jan 29 09:12:23 compute-0 podman[80930]: 2026-01-29 09:12:23.58239493 +0000 UTC m=+0.852401591 container remove 6520d9e4cd463d69ff8cabf94396754a8c829c773d6c62b57e816d0da0f58a4d (image=quay.io/ceph/ceph:v20, name=eloquent_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 09:12:23 compute-0 sudo[81047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:12:23 compute-0 systemd[1]: libpod-conmon-6520d9e4cd463d69ff8cabf94396754a8c829c773d6c62b57e816d0da0f58a4d.scope: Deactivated successfully.
Jan 29 09:12:23 compute-0 sudo[81047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:23 compute-0 sudo[81047]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:23 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 29 09:12:23 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 29 09:12:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 29 09:12:23 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 29 09:12:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 29 09:12:23 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 29 09:12:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:12:23 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:23 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 29 09:12:23 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 29 09:12:23 compute-0 sudo[80901]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:23 compute-0 sudo[81073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:23 compute-0 sudo[81073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:23 compute-0 sudo[81073]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:23 compute-0 sudo[81098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:23 compute-0 sudo[81098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:23 compute-0 sudo[81146]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlmrlkwogloxlehniyvlowwyhhjvoeek ; /usr/bin/python3'
Jan 29 09:12:23 compute-0 sudo[81146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:12:23 compute-0 python3[81148]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:12:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:23 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'devicehealth'
Jan 29 09:12:24 compute-0 podman[81156]: 2026-01-29 09:12:24.002419928 +0000 UTC m=+0.058835840 container create 3a311603f631b2c05db12e0a8714eb238356f97d03b638cb1ea77289ae7cff57 (image=quay.io/ceph/ceph:v20, name=gracious_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 09:12:24 compute-0 systemd[1]: Started libpod-conmon-3a311603f631b2c05db12e0a8714eb238356f97d03b638cb1ea77289ae7cff57.scope.
Jan 29 09:12:24 compute-0 podman[81177]: 2026-01-29 09:12:24.043164887 +0000 UTC m=+0.057382670 container create d041d4cc82e1c922a737aa554187bc3534ea5e270de762827d5f0e4c44a50a87 (image=quay.io/ceph/ceph:v20, name=interesting_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:24 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'diskprediction_local'
Jan 29 09:12:24 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70fed53ef619b9efde13ce672568fbff4532db7282324e35edb315daa04d9974/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70fed53ef619b9efde13ce672568fbff4532db7282324e35edb315daa04d9974/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70fed53ef619b9efde13ce672568fbff4532db7282324e35edb315daa04d9974/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:24 compute-0 systemd[1]: Started libpod-conmon-d041d4cc82e1c922a737aa554187bc3534ea5e270de762827d5f0e4c44a50a87.scope.
Jan 29 09:12:24 compute-0 podman[81156]: 2026-01-29 09:12:23.976784176 +0000 UTC m=+0.033200118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:24 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:24 compute-0 podman[81156]: 2026-01-29 09:12:24.103916377 +0000 UTC m=+0.160332319 container init 3a311603f631b2c05db12e0a8714eb238356f97d03b638cb1ea77289ae7cff57 (image=quay.io/ceph/ceph:v20, name=gracious_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 29 09:12:24 compute-0 podman[81156]: 2026-01-29 09:12:24.109019435 +0000 UTC m=+0.165435357 container start 3a311603f631b2c05db12e0a8714eb238356f97d03b638cb1ea77289ae7cff57 (image=quay.io/ceph/ceph:v20, name=gracious_germain, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 09:12:24 compute-0 podman[81177]: 2026-01-29 09:12:24.01547223 +0000 UTC m=+0.029690043 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:24 compute-0 podman[81177]: 2026-01-29 09:12:24.113517727 +0000 UTC m=+0.127735520 container init d041d4cc82e1c922a737aa554187bc3534ea5e270de762827d5f0e4c44a50a87 (image=quay.io/ceph/ceph:v20, name=interesting_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 09:12:24 compute-0 podman[81156]: 2026-01-29 09:12:24.113985499 +0000 UTC m=+0.170401431 container attach 3a311603f631b2c05db12e0a8714eb238356f97d03b638cb1ea77289ae7cff57 (image=quay.io/ceph/ceph:v20, name=gracious_germain, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 09:12:24 compute-0 podman[81177]: 2026-01-29 09:12:24.117467213 +0000 UTC m=+0.131684996 container start d041d4cc82e1c922a737aa554187bc3534ea5e270de762827d5f0e4c44a50a87 (image=quay.io/ceph/ceph:v20, name=interesting_wescoff, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 09:12:24 compute-0 interesting_wescoff[81200]: 167 167
Jan 29 09:12:24 compute-0 systemd[1]: libpod-d041d4cc82e1c922a737aa554187bc3534ea5e270de762827d5f0e4c44a50a87.scope: Deactivated successfully.
Jan 29 09:12:24 compute-0 podman[81177]: 2026-01-29 09:12:24.171278256 +0000 UTC m=+0.185496049 container attach d041d4cc82e1c922a737aa554187bc3534ea5e270de762827d5f0e4c44a50a87 (image=quay.io/ceph/ceph:v20, name=interesting_wescoff, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 09:12:24 compute-0 podman[81177]: 2026-01-29 09:12:24.172124799 +0000 UTC m=+0.186342582 container died d041d4cc82e1c922a737aa554187bc3534ea5e270de762827d5f0e4c44a50a87 (image=quay.io/ceph/ceph:v20, name=interesting_wescoff, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e9f2ab6bf58682645bf56455fa7c7416a06e45390d5f20ef300915e01f68008-merged.mount: Deactivated successfully.
Jan 29 09:12:24 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-azpxyn[80665]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 29 09:12:24 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-azpxyn[80665]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 29 09:12:24 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-azpxyn[80665]:   from numpy import show_config as show_numpy_config
Jan 29 09:12:24 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'influx'
Jan 29 09:12:24 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'insights'
Jan 29 09:12:24 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'iostat'
Jan 29 09:12:24 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:24 compute-0 podman[81177]: 2026-01-29 09:12:24.562049264 +0000 UTC m=+0.576267067 container remove d041d4cc82e1c922a737aa554187bc3534ea5e270de762827d5f0e4c44a50a87 (image=quay.io/ceph/ceph:v20, name=interesting_wescoff, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 29 09:12:24 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:24 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3073397346' entity='client.admin' 
Jan 29 09:12:24 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:24 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:24 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:12:24 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:24 compute-0 ceph-mon[75183]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 29 09:12:24 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 29 09:12:24 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 29 09:12:24 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:24 compute-0 ceph-mon[75183]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 29 09:12:24 compute-0 ceph-mon[75183]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:24 compute-0 systemd[1]: libpod-conmon-d041d4cc82e1c922a737aa554187bc3534ea5e270de762827d5f0e4c44a50a87.scope: Deactivated successfully.
Jan 29 09:12:24 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'k8sevents'
Jan 29 09:12:24 compute-0 sudo[81098]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:12:24 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 29 09:12:24 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3469536472' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 29 09:12:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:24 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:24 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ucpkkb (unknown last config time)...
Jan 29 09:12:24 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ucpkkb (unknown last config time)...
Jan 29 09:12:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ucpkkb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 29 09:12:24 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ucpkkb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 29 09:12:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 29 09:12:24 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mgr services"} : dispatch
Jan 29 09:12:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:12:24 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:24 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ucpkkb on compute-0
Jan 29 09:12:24 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ucpkkb on compute-0
Jan 29 09:12:24 compute-0 sudo[81242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:24 compute-0 sudo[81242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:24 compute-0 sudo[81242]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:24 compute-0 sudo[81267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:24 compute-0 sudo[81267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:25 compute-0 podman[81308]: 2026-01-29 09:12:25.110011186 +0000 UTC m=+0.057982436 container create 2347d91550bb50a2013af678e11d31fb71b6f0fc78bec5f5a59e722349a1cc2f (image=quay.io/ceph/ceph:v20, name=thirsty_haslett, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:25 compute-0 systemd[1]: Started libpod-conmon-2347d91550bb50a2013af678e11d31fb71b6f0fc78bec5f5a59e722349a1cc2f.scope.
Jan 29 09:12:25 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:25 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'localpool'
Jan 29 09:12:25 compute-0 podman[81308]: 2026-01-29 09:12:25.092180534 +0000 UTC m=+0.040151804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:25 compute-0 podman[81308]: 2026-01-29 09:12:25.20352282 +0000 UTC m=+0.151494100 container init 2347d91550bb50a2013af678e11d31fb71b6f0fc78bec5f5a59e722349a1cc2f (image=quay.io/ceph/ceph:v20, name=thirsty_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 09:12:25 compute-0 podman[81308]: 2026-01-29 09:12:25.210500518 +0000 UTC m=+0.158471768 container start 2347d91550bb50a2013af678e11d31fb71b6f0fc78bec5f5a59e722349a1cc2f (image=quay.io/ceph/ceph:v20, name=thirsty_haslett, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:12:25 compute-0 thirsty_haslett[81324]: 167 167
Jan 29 09:12:25 compute-0 systemd[1]: libpod-2347d91550bb50a2013af678e11d31fb71b6f0fc78bec5f5a59e722349a1cc2f.scope: Deactivated successfully.
Jan 29 09:12:25 compute-0 podman[81308]: 2026-01-29 09:12:25.217881688 +0000 UTC m=+0.165852968 container attach 2347d91550bb50a2013af678e11d31fb71b6f0fc78bec5f5a59e722349a1cc2f (image=quay.io/ceph/ceph:v20, name=thirsty_haslett, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:12:25 compute-0 podman[81308]: 2026-01-29 09:12:25.218424572 +0000 UTC m=+0.166395842 container died 2347d91550bb50a2013af678e11d31fb71b6f0fc78bec5f5a59e722349a1cc2f (image=quay.io/ceph/ceph:v20, name=thirsty_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:12:25 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'mds_autoscaler'
Jan 29 09:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-50c886f95c6b4b2474e0539ee7002bc36eb7dab87eb067f770700dbbdd864aeb-merged.mount: Deactivated successfully.
Jan 29 09:12:25 compute-0 podman[81308]: 2026-01-29 09:12:25.355722959 +0000 UTC m=+0.303694209 container remove 2347d91550bb50a2013af678e11d31fb71b6f0fc78bec5f5a59e722349a1cc2f (image=quay.io/ceph/ceph:v20, name=thirsty_haslett, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:25 compute-0 systemd[1]: libpod-conmon-2347d91550bb50a2013af678e11d31fb71b6f0fc78bec5f5a59e722349a1cc2f.scope: Deactivated successfully.
Jan 29 09:12:25 compute-0 sudo[81267]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:25 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:12:25 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:25 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:25 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:25 compute-0 sudo[81340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:25 compute-0 sudo[81340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:25 compute-0 sudo[81340]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:25 compute-0 sudo[81365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 29 09:12:25 compute-0 sudo[81365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:25 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'mirroring'
Jan 29 09:12:25 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 29 09:12:25 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 09:12:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:25 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3469536472' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 29 09:12:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:25 compute-0 ceph-mon[75183]: Reconfiguring mgr.compute-0.ucpkkb (unknown last config time)...
Jan 29 09:12:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ucpkkb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 29 09:12:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mgr services"} : dispatch
Jan 29 09:12:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:25 compute-0 ceph-mon[75183]: Reconfiguring daemon mgr.compute-0.ucpkkb on compute-0
Jan 29 09:12:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:25 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3469536472' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 29 09:12:25 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 29 09:12:25 compute-0 gracious_germain[81195]: set require_min_compat_client to mimic
Jan 29 09:12:25 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 29 09:12:25 compute-0 systemd[1]: libpod-3a311603f631b2c05db12e0a8714eb238356f97d03b638cb1ea77289ae7cff57.scope: Deactivated successfully.
Jan 29 09:12:25 compute-0 podman[81156]: 2026-01-29 09:12:25.667586637 +0000 UTC m=+1.724002539 container died 3a311603f631b2c05db12e0a8714eb238356f97d03b638cb1ea77289ae7cff57 (image=quay.io/ceph/ceph:v20, name=gracious_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:12:25 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'nfs'
Jan 29 09:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-70fed53ef619b9efde13ce672568fbff4532db7282324e35edb315daa04d9974-merged.mount: Deactivated successfully.
Jan 29 09:12:25 compute-0 podman[81156]: 2026-01-29 09:12:25.714052391 +0000 UTC m=+1.770468303 container remove 3a311603f631b2c05db12e0a8714eb238356f97d03b638cb1ea77289ae7cff57 (image=quay.io/ceph/ceph:v20, name=gracious_germain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 09:12:25 compute-0 systemd[1]: libpod-conmon-3a311603f631b2c05db12e0a8714eb238356f97d03b638cb1ea77289ae7cff57.scope: Deactivated successfully.
Jan 29 09:12:25 compute-0 sudo[81146]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:25 compute-0 podman[81448]: 2026-01-29 09:12:25.964863322 +0000 UTC m=+0.055412577 container exec 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 09:12:25 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'orchestrator'
Jan 29 09:12:26 compute-0 podman[81448]: 2026-01-29 09:12:26.05556824 +0000 UTC m=+0.146117485 container exec_died 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:12:26 compute-0 sudo[81517]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsjvfxihadhfvoqfhrczkvxwsbyjkhqh ; /usr/bin/python3'
Jan 29 09:12:26 compute-0 sudo[81517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:12:26 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'osd_perf_query'
Jan 29 09:12:26 compute-0 python3[81521]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:12:26 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'osd_support'
Jan 29 09:12:26 compute-0 podman[81555]: 2026-01-29 09:12:26.335288451 +0000 UTC m=+0.052081167 container create d50ad02ef604cee7459907a8be239de20484a7b7c5c9bb95130d077b7bf51fcd (image=quay.io/ceph/ceph:v20, name=great_yalow, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 09:12:26 compute-0 systemd[1]: Started libpod-conmon-d50ad02ef604cee7459907a8be239de20484a7b7c5c9bb95130d077b7bf51fcd.scope.
Jan 29 09:12:26 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34910781206fe162165e88266a05240b987d15894f96c02be8b0262a1e098758/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34910781206fe162165e88266a05240b987d15894f96c02be8b0262a1e098758/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:26 compute-0 podman[81555]: 2026-01-29 09:12:26.320527293 +0000 UTC m=+0.037320039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34910781206fe162165e88266a05240b987d15894f96c02be8b0262a1e098758/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:26 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'pg_autoscaler'
Jan 29 09:12:26 compute-0 podman[81555]: 2026-01-29 09:12:26.433278657 +0000 UTC m=+0.150071393 container init d50ad02ef604cee7459907a8be239de20484a7b7c5c9bb95130d077b7bf51fcd (image=quay.io/ceph/ceph:v20, name=great_yalow, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 09:12:26 compute-0 podman[81555]: 2026-01-29 09:12:26.439538845 +0000 UTC m=+0.156331561 container start d50ad02ef604cee7459907a8be239de20484a7b7c5c9bb95130d077b7bf51fcd (image=quay.io/ceph/ceph:v20, name=great_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 09:12:26 compute-0 podman[81555]: 2026-01-29 09:12:26.453279776 +0000 UTC m=+0.170072512 container attach d50ad02ef604cee7459907a8be239de20484a7b7c5c9bb95130d077b7bf51fcd (image=quay.io/ceph/ceph:v20, name=great_yalow, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 09:12:26 compute-0 sudo[81365]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:26 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:12:26 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:26 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:26 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:26 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:12:26 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:26 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:12:26 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:12:26 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:12:26 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:26 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'progress'
Jan 29 09:12:26 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:12:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:12:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:12:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:12:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:12:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:12:26 compute-0 ceph-mgr[75473]: [progress INFO root] Writing back 2 completed events
Jan 29 09:12:26 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 29 09:12:26 compute-0 sudo[81602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:12:26 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:26 compute-0 sudo[81602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:26 compute-0 sudo[81602]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:26 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'prometheus'
Jan 29 09:12:26 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3469536472' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 29 09:12:26 compute-0 ceph-mon[75183]: osdmap e3: 0 total, 0 up, 0 in
Jan 29 09:12:26 compute-0 ceph-mon[75183]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:26 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:26 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:26 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:26 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:12:26 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:26 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:26 compute-0 sudo[81647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:26 compute-0 sudo[81647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:26 compute-0 sudo[81647]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:26 compute-0 sudo[81672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 29 09:12:26 compute-0 sudo[81672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:27 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'rbd_support'
Jan 29 09:12:27 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'rgw'
Jan 29 09:12:27 compute-0 sudo[81672]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 29 09:12:27 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 29 09:12:27 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 29 09:12:27 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 29 09:12:27 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: [cephadm INFO root] Added host compute-0
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 29 09:12:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:12:27 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 29 09:12:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:12:27 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:12:27 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 29 09:12:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 29 09:12:27 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 29 09:12:27 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 29 09:12:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 29 09:12:27 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev dd7315e4-28d8-4a1b-9e38-4fd71a2cce67 (Updating mgr deployment (-1 -> 1))
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.azpxyn from compute-0 -- ports [8765]
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.azpxyn from compute-0 -- ports [8765]
Jan 29 09:12:27 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 great_yalow[81592]: Added host 'compute-0' with addr '192.168.122.100'
Jan 29 09:12:27 compute-0 great_yalow[81592]: Scheduled mon update...
Jan 29 09:12:27 compute-0 great_yalow[81592]: Scheduled mgr update...
Jan 29 09:12:27 compute-0 great_yalow[81592]: Scheduled osd.default_drive_group update...
Jan 29 09:12:27 compute-0 systemd[1]: libpod-d50ad02ef604cee7459907a8be239de20484a7b7c5c9bb95130d077b7bf51fcd.scope: Deactivated successfully.
Jan 29 09:12:27 compute-0 podman[81555]: 2026-01-29 09:12:27.427216487 +0000 UTC m=+1.144009203 container died d50ad02ef604cee7459907a8be239de20484a7b7c5c9bb95130d077b7bf51fcd (image=quay.io/ceph/ceph:v20, name=great_yalow, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 09:12:27 compute-0 sudo[81717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:27 compute-0 sudo[81717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:27 compute-0 sudo[81717]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-34910781206fe162165e88266a05240b987d15894f96c02be8b0262a1e098758-merged.mount: Deactivated successfully.
Jan 29 09:12:27 compute-0 podman[81555]: 2026-01-29 09:12:27.471458021 +0000 UTC m=+1.188250737 container remove d50ad02ef604cee7459907a8be239de20484a7b7c5c9bb95130d077b7bf51fcd (image=quay.io/ceph/ceph:v20, name=great_yalow, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:27 compute-0 systemd[1]: libpod-conmon-d50ad02ef604cee7459907a8be239de20484a7b7c5c9bb95130d077b7bf51fcd.scope: Deactivated successfully.
Jan 29 09:12:27 compute-0 sudo[81750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --name mgr.compute-0.azpxyn --force --tcp-ports 8765
Jan 29 09:12:27 compute-0 sudo[81750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:27 compute-0 sudo[81517]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:27 compute-0 ceph-mgr[80674]: mgr[py] Loading python module 'rook'
Jan 29 09:12:27 compute-0 ceph-mon[75183]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:12:27 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:27 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:12:27 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:27 compute-0 sudo[81808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmlwuugicmiiszftgbzhiephnlbvqnpw ; /usr/bin/python3'
Jan 29 09:12:27 compute-0 sudo[81808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:12:27 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.azpxyn for 3fdce3ca-565d-5459-88e8-1ffe58b48437...
Jan 29 09:12:27 compute-0 python3[81816]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:12:27 compute-0 podman[81849]: 2026-01-29 09:12:27.938985602 +0000 UTC m=+0.077880183 container died 5f9301762b7e24928d43611d6e46758027d464f09d64c69ac56a20291718b746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-azpxyn, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e07aaf68d0bf68785636ff2f03f9379bf4fce370e68ebffe7ebbb4c5b209cefb-merged.mount: Deactivated successfully.
Jan 29 09:12:27 compute-0 podman[81860]: 2026-01-29 09:12:27.965558329 +0000 UTC m=+0.057673088 container create 79d55477beedb64087600d9369a18ceb8a01e9a63191f2cadcced7852c4ff489 (image=quay.io/ceph/ceph:v20, name=boring_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:28 compute-0 podman[81849]: 2026-01-29 09:12:28.006949777 +0000 UTC m=+0.145844368 container remove 5f9301762b7e24928d43611d6e46758027d464f09d64c69ac56a20291718b746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-azpxyn, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 09:12:28 compute-0 bash[81849]: ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-azpxyn
Jan 29 09:12:28 compute-0 systemd[1]: ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437@mgr.compute-0.azpxyn.service: Main process exited, code=exited, status=143/n/a
Jan 29 09:12:28 compute-0 systemd[1]: Started libpod-conmon-79d55477beedb64087600d9369a18ceb8a01e9a63191f2cadcced7852c4ff489.scope.
Jan 29 09:12:28 compute-0 podman[81860]: 2026-01-29 09:12:27.941165161 +0000 UTC m=+0.033279940 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:28 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba3af9b71775e06bd78aa2e2db5fceba682f2dc9d51b9864fa200c209f543712/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba3af9b71775e06bd78aa2e2db5fceba682f2dc9d51b9864fa200c209f543712/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba3af9b71775e06bd78aa2e2db5fceba682f2dc9d51b9864fa200c209f543712/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:28 compute-0 podman[81860]: 2026-01-29 09:12:28.067295126 +0000 UTC m=+0.159409905 container init 79d55477beedb64087600d9369a18ceb8a01e9a63191f2cadcced7852c4ff489 (image=quay.io/ceph/ceph:v20, name=boring_albattani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:28 compute-0 podman[81860]: 2026-01-29 09:12:28.074477529 +0000 UTC m=+0.166592288 container start 79d55477beedb64087600d9369a18ceb8a01e9a63191f2cadcced7852c4ff489 (image=quay.io/ceph/ceph:v20, name=boring_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:28 compute-0 podman[81860]: 2026-01-29 09:12:28.087561553 +0000 UTC m=+0.179676312 container attach 79d55477beedb64087600d9369a18ceb8a01e9a63191f2cadcced7852c4ff489 (image=quay.io/ceph/ceph:v20, name=boring_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:12:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:12:28 compute-0 systemd[1]: ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437@mgr.compute-0.azpxyn.service: Failed with result 'exit-code'.
Jan 29 09:12:28 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.azpxyn for 3fdce3ca-565d-5459-88e8-1ffe58b48437.
Jan 29 09:12:28 compute-0 systemd[1]: ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437@mgr.compute-0.azpxyn.service: Consumed 6.934s CPU time, 344.8M memory peak, read 0B from disk, written 178.5K to disk.
Jan 29 09:12:28 compute-0 systemd[1]: Reloading.
Jan 29 09:12:28 compute-0 systemd-rc-local-generator[81971]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:12:28 compute-0 systemd-sysv-generator[81976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:12:28 compute-0 sudo[81750]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:28 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.azpxyn
Jan 29 09:12:28 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.azpxyn
Jan 29 09:12:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.azpxyn"} v 0)
Jan 29 09:12:28 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.azpxyn"} : dispatch
Jan 29 09:12:28 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.azpxyn"}]': finished
Jan 29 09:12:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 29 09:12:28 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:28 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:28 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev dd7315e4-28d8-4a1b-9e38-4fd71a2cce67 (Updating mgr deployment (-1 -> 1))
Jan 29 09:12:28 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event dd7315e4-28d8-4a1b-9e38-4fd71a2cce67 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Jan 29 09:12:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 29 09:12:28 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:28 compute-0 sudo[81988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:12:28 compute-0 sudo[81988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:28 compute-0 sudo[81988]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 29 09:12:28 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1632145906' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 09:12:28 compute-0 boring_albattani[81897]: 
Jan 29 09:12:28 compute-0 boring_albattani[81897]: {"fsid":"3fdce3ca-565d-5459-88e8-1ffe58b48437","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":50,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-29T09:11:36:099108+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-29T09:11:36.103347+0000","services":{}},"progress_events":{}}
Jan 29 09:12:28 compute-0 systemd[1]: libpod-79d55477beedb64087600d9369a18ceb8a01e9a63191f2cadcced7852c4ff489.scope: Deactivated successfully.
Jan 29 09:12:28 compute-0 podman[81860]: 2026-01-29 09:12:28.659637725 +0000 UTC m=+0.751752494 container died 79d55477beedb64087600d9369a18ceb8a01e9a63191f2cadcced7852c4ff489 (image=quay.io/ceph/ceph:v20, name=boring_albattani, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 09:12:28 compute-0 sudo[82013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:28 compute-0 sudo[82013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:28 compute-0 sudo[82013]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:28 compute-0 ceph-mon[75183]: Added host compute-0
Jan 29 09:12:28 compute-0 ceph-mon[75183]: Saving service mon spec with placement compute-0
Jan 29 09:12:28 compute-0 ceph-mon[75183]: Saving service mgr spec with placement compute-0
Jan 29 09:12:28 compute-0 ceph-mon[75183]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 29 09:12:28 compute-0 ceph-mon[75183]: Saving service osd.default_drive_group spec with placement compute-0
Jan 29 09:12:28 compute-0 ceph-mon[75183]: Removing daemon mgr.compute-0.azpxyn from compute-0 -- ports [8765]
Jan 29 09:12:28 compute-0 ceph-mon[75183]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:28 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.azpxyn"} : dispatch
Jan 29 09:12:28 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.azpxyn"}]': finished
Jan 29 09:12:28 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:28 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:28 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1632145906' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 09:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba3af9b71775e06bd78aa2e2db5fceba682f2dc9d51b9864fa200c209f543712-merged.mount: Deactivated successfully.
Jan 29 09:12:28 compute-0 podman[81860]: 2026-01-29 09:12:28.703204801 +0000 UTC m=+0.795319560 container remove 79d55477beedb64087600d9369a18ceb8a01e9a63191f2cadcced7852c4ff489 (image=quay.io/ceph/ceph:v20, name=boring_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 09:12:28 compute-0 systemd[1]: libpod-conmon-79d55477beedb64087600d9369a18ceb8a01e9a63191f2cadcced7852c4ff489.scope: Deactivated successfully.
Jan 29 09:12:28 compute-0 sudo[81808]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:28 compute-0 sudo[82047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 29 09:12:28 compute-0 sudo[82047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:29 compute-0 podman[82120]: 2026-01-29 09:12:29.122058038 +0000 UTC m=+0.051599604 container exec 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:29 compute-0 podman[82120]: 2026-01-29 09:12:29.222737366 +0000 UTC m=+0.152278902 container exec_died 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 09:12:29 compute-0 sudo[82047]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:12:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:12:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:12:29 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:12:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:12:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:12:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:12:29 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:12:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:12:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:12:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:12:29 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:29 compute-0 sudo[82215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:29 compute-0 sudo[82215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:29 compute-0 sudo[82215]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:29 compute-0 sudo[82240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:12:29 compute-0 sudo[82240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:29 compute-0 ceph-mon[75183]: Removing key for mgr.compute-0.azpxyn
Jan 29 09:12:29 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:29 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:29 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:29 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:29 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:29 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:12:29 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:29 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:12:29 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:12:29 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:29 compute-0 podman[82277]: 2026-01-29 09:12:29.943829501 +0000 UTC m=+0.041346667 container create 6d208863c4e604510cb1a92be4234130c993426b176e971c1460b870c17e3d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 09:12:29 compute-0 systemd[1]: Started libpod-conmon-6d208863c4e604510cb1a92be4234130c993426b176e971c1460b870c17e3d6c.scope.
Jan 29 09:12:30 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:30 compute-0 podman[82277]: 2026-01-29 09:12:29.925283601 +0000 UTC m=+0.022800797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:30 compute-0 podman[82277]: 2026-01-29 09:12:30.031812306 +0000 UTC m=+0.129329492 container init 6d208863c4e604510cb1a92be4234130c993426b176e971c1460b870c17e3d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Jan 29 09:12:30 compute-0 podman[82277]: 2026-01-29 09:12:30.03824935 +0000 UTC m=+0.135766516 container start 6d208863c4e604510cb1a92be4234130c993426b176e971c1460b870c17e3d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:30 compute-0 vigilant_tesla[82294]: 167 167
Jan 29 09:12:30 compute-0 systemd[1]: libpod-6d208863c4e604510cb1a92be4234130c993426b176e971c1460b870c17e3d6c.scope: Deactivated successfully.
Jan 29 09:12:30 compute-0 podman[82277]: 2026-01-29 09:12:30.099887904 +0000 UTC m=+0.197405080 container attach 6d208863c4e604510cb1a92be4234130c993426b176e971c1460b870c17e3d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:30 compute-0 podman[82277]: 2026-01-29 09:12:30.100604393 +0000 UTC m=+0.198121559 container died 6d208863c4e604510cb1a92be4234130c993426b176e971c1460b870c17e3d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 09:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f2905a38be2f1001e75a9eb5fbe25cb6749571fe83cacd2adee6afb349952a9-merged.mount: Deactivated successfully.
Jan 29 09:12:30 compute-0 podman[82277]: 2026-01-29 09:12:30.139825052 +0000 UTC m=+0.237342218 container remove 6d208863c4e604510cb1a92be4234130c993426b176e971c1460b870c17e3d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 09:12:30 compute-0 systemd[1]: libpod-conmon-6d208863c4e604510cb1a92be4234130c993426b176e971c1460b870c17e3d6c.scope: Deactivated successfully.
Jan 29 09:12:30 compute-0 podman[82320]: 2026-01-29 09:12:30.271105826 +0000 UTC m=+0.043512546 container create 14eca5df279f28aa31d7af3aaca0438a4440bcac2bc2179e7a1d7e85d6053dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_bell, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 09:12:30 compute-0 systemd[1]: Started libpod-conmon-14eca5df279f28aa31d7af3aaca0438a4440bcac2bc2179e7a1d7e85d6053dad.scope.
Jan 29 09:12:30 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0633e03cf4a8a3111eeef17529a6e054d53d9af1a2b81502e5340cf6aa3951c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:30 compute-0 podman[82320]: 2026-01-29 09:12:30.251970989 +0000 UTC m=+0.024377719 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0633e03cf4a8a3111eeef17529a6e054d53d9af1a2b81502e5340cf6aa3951c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0633e03cf4a8a3111eeef17529a6e054d53d9af1a2b81502e5340cf6aa3951c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0633e03cf4a8a3111eeef17529a6e054d53d9af1a2b81502e5340cf6aa3951c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0633e03cf4a8a3111eeef17529a6e054d53d9af1a2b81502e5340cf6aa3951c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:30 compute-0 podman[82320]: 2026-01-29 09:12:30.362838702 +0000 UTC m=+0.135245412 container init 14eca5df279f28aa31d7af3aaca0438a4440bcac2bc2179e7a1d7e85d6053dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_bell, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 09:12:30 compute-0 podman[82320]: 2026-01-29 09:12:30.368279289 +0000 UTC m=+0.140685999 container start 14eca5df279f28aa31d7af3aaca0438a4440bcac2bc2179e7a1d7e85d6053dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_bell, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:30 compute-0 podman[82320]: 2026-01-29 09:12:30.374463816 +0000 UTC m=+0.146870556 container attach 14eca5df279f28aa31d7af3aaca0438a4440bcac2bc2179e7a1d7e85d6053dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_bell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:12:30 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:30 compute-0 ceph-mon[75183]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:30 compute-0 suspicious_bell[82337]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:12:30 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:31 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:31 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f2ccc677-8576-4fc5-9e3f-60956ceb21b0
Jan 29 09:12:31 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0"} v 0)
Jan 29 09:12:31 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4156979148' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0"} : dispatch
Jan 29 09:12:31 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 29 09:12:31 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 09:12:31 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4156979148' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0"}]': finished
Jan 29 09:12:31 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 29 09:12:31 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 29 09:12:31 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:31 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:31 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:31 compute-0 ceph-mgr[75473]: [progress INFO root] Writing back 3 completed events
Jan 29 09:12:31 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 29 09:12:31 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:31 compute-0 suspicious_bell[82337]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 29 09:12:31 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 29 09:12:31 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 29 09:12:31 compute-0 lvm[82429]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:12:31 compute-0 lvm[82429]: VG ceph_vg0 finished
Jan 29 09:12:31 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:31 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 29 09:12:31 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4156979148' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0"} : dispatch
Jan 29 09:12:31 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4156979148' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0"}]': finished
Jan 29 09:12:31 compute-0 ceph-mon[75183]: osdmap e4: 1 total, 0 up, 1 in
Jan 29 09:12:31 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:31 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 29 09:12:32 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1408567982' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 29 09:12:32 compute-0 suspicious_bell[82337]:  stderr: got monmap epoch 1
Jan 29 09:12:32 compute-0 suspicious_bell[82337]: --> Creating keyring file for osd.0
Jan 29 09:12:32 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 29 09:12:32 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 29 09:12:32 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid f2ccc677-8576-4fc5-9e3f-60956ceb21b0 --setuser ceph --setgroup ceph
Jan 29 09:12:32 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:32 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 29 09:12:32 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 29 09:12:32 compute-0 ceph-mon[75183]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:32 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1408567982' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 29 09:12:33 compute-0 suspicious_bell[82337]:  stderr: 2026-01-29T09:12:32.267+0000 7fef2ce018c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 29 09:12:33 compute-0 suspicious_bell[82337]:  stderr: 2026-01-29T09:12:32.296+0000 7fef2ce018c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 29 09:12:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 173bb34b-fc2f-4ae0-a6ce-59e1b64aaace
Jan 29 09:12:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace"} v 0)
Jan 29 09:12:33 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1897405701' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace"} : dispatch
Jan 29 09:12:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 29 09:12:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 09:12:33 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1897405701' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace"}]': finished
Jan 29 09:12:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 29 09:12:33 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 29 09:12:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:33 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:12:33 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:12:33 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:33 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:12:33 compute-0 ceph-mon[75183]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 29 09:12:33 compute-0 ceph-mon[75183]: Cluster is now healthy
Jan 29 09:12:33 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1897405701' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace"} : dispatch
Jan 29 09:12:33 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1897405701' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace"}]': finished
Jan 29 09:12:33 compute-0 ceph-mon[75183]: osdmap e5: 2 total, 0 up, 2 in
Jan 29 09:12:33 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:33 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:12:33 compute-0 lvm[83377]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:12:33 compute-0 lvm[83377]: VG ceph_vg1 finished
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:33 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 29 09:12:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 29 09:12:34 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/185099383' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 29 09:12:34 compute-0 suspicious_bell[82337]:  stderr: got monmap epoch 1
Jan 29 09:12:34 compute-0 suspicious_bell[82337]: --> Creating keyring file for osd.1
Jan 29 09:12:34 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 29 09:12:34 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 29 09:12:34 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 173bb34b-fc2f-4ae0-a6ce-59e1b64aaace --setuser ceph --setgroup ceph
Jan 29 09:12:34 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:34 compute-0 ceph-mon[75183]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:34 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/185099383' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 29 09:12:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:36 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:36 compute-0 suspicious_bell[82337]:  stderr: 2026-01-29T09:12:34.574+0000 7f891b0f28c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 29 09:12:36 compute-0 suspicious_bell[82337]:  stderr: 2026-01-29T09:12:34.593+0000 7f891b0f28c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 29 09:12:36 compute-0 suspicious_bell[82337]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 29 09:12:36 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 29 09:12:36 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 29 09:12:36 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:36 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:36 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 29 09:12:36 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 29 09:12:36 compute-0 suspicious_bell[82337]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 29 09:12:36 compute-0 suspicious_bell[82337]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 29 09:12:36 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:36 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:36 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9a078cd8-4bd4-40a2-98a3-c1163db42997
Jan 29 09:12:37 compute-0 ceph-mon[75183]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "9a078cd8-4bd4-40a2-98a3-c1163db42997"} v 0)
Jan 29 09:12:37 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1752695742' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "9a078cd8-4bd4-40a2-98a3-c1163db42997"} : dispatch
Jan 29 09:12:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 29 09:12:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 09:12:37 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1752695742' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9a078cd8-4bd4-40a2-98a3-c1163db42997"}]': finished
Jan 29 09:12:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 29 09:12:37 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 29 09:12:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:37 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:12:37 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:12:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:12:37 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:12:37 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:37 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:12:37 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:12:37 compute-0 lvm[84328]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:12:37 compute-0 lvm[84328]: VG ceph_vg2 finished
Jan 29 09:12:37 compute-0 suspicious_bell[82337]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 29 09:12:37 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 29 09:12:37 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 29 09:12:37 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 29 09:12:37 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 29 09:12:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 29 09:12:37 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572320140' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 29 09:12:37 compute-0 suspicious_bell[82337]:  stderr: got monmap epoch 1
Jan 29 09:12:37 compute-0 suspicious_bell[82337]: --> Creating keyring file for osd.2
Jan 29 09:12:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:37 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 29 09:12:37 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 29 09:12:37 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 9a078cd8-4bd4-40a2-98a3-c1163db42997 --setuser ceph --setgroup ceph
Jan 29 09:12:38 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1752695742' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "9a078cd8-4bd4-40a2-98a3-c1163db42997"} : dispatch
Jan 29 09:12:38 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1752695742' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9a078cd8-4bd4-40a2-98a3-c1163db42997"}]': finished
Jan 29 09:12:38 compute-0 ceph-mon[75183]: osdmap e6: 3 total, 0 up, 3 in
Jan 29 09:12:38 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:38 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:12:38 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:12:38 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2572320140' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 29 09:12:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:12:38 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:39 compute-0 suspicious_bell[82337]:  stderr: 2026-01-29T09:12:38.016+0000 7f29a70eb8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 29 09:12:39 compute-0 suspicious_bell[82337]:  stderr: 2026-01-29T09:12:38.039+0000 7f29a70eb8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 29 09:12:39 compute-0 suspicious_bell[82337]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 29 09:12:39 compute-0 ceph-mon[75183]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:39 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 29 09:12:39 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 29 09:12:39 compute-0 suspicious_bell[82337]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 29 09:12:39 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 29 09:12:39 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 29 09:12:39 compute-0 suspicious_bell[82337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 29 09:12:39 compute-0 suspicious_bell[82337]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 29 09:12:39 compute-0 suspicious_bell[82337]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 29 09:12:39 compute-0 systemd[1]: libpod-14eca5df279f28aa31d7af3aaca0438a4440bcac2bc2179e7a1d7e85d6053dad.scope: Deactivated successfully.
Jan 29 09:12:39 compute-0 systemd[1]: libpod-14eca5df279f28aa31d7af3aaca0438a4440bcac2bc2179e7a1d7e85d6053dad.scope: Consumed 5.729s CPU time.
Jan 29 09:12:39 compute-0 podman[85248]: 2026-01-29 09:12:39.225719405 +0000 UTC m=+0.031207053 container died 14eca5df279f28aa31d7af3aaca0438a4440bcac2bc2179e7a1d7e85d6053dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_bell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0633e03cf4a8a3111eeef17529a6e054d53d9af1a2b81502e5340cf6aa3951c-merged.mount: Deactivated successfully.
Jan 29 09:12:39 compute-0 podman[85248]: 2026-01-29 09:12:39.306846925 +0000 UTC m=+0.112334553 container remove 14eca5df279f28aa31d7af3aaca0438a4440bcac2bc2179e7a1d7e85d6053dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_bell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 09:12:39 compute-0 systemd[1]: libpod-conmon-14eca5df279f28aa31d7af3aaca0438a4440bcac2bc2179e7a1d7e85d6053dad.scope: Deactivated successfully.
Jan 29 09:12:39 compute-0 sudo[82240]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:39 compute-0 sudo[85263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:39 compute-0 sudo[85263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:39 compute-0 sudo[85263]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:39 compute-0 sudo[85288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:12:39 compute-0 sudo[85288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:39 compute-0 podman[85326]: 2026-01-29 09:12:39.752211728 +0000 UTC m=+0.051284996 container create d424dd439dde71c9f297c8d54027bfd82e03c93a97d31d193079bf2dc2fb4bed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 09:12:39 compute-0 systemd[1]: Started libpod-conmon-d424dd439dde71c9f297c8d54027bfd82e03c93a97d31d193079bf2dc2fb4bed.scope.
Jan 29 09:12:39 compute-0 podman[85326]: 2026-01-29 09:12:39.719647079 +0000 UTC m=+0.018720357 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:39 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:39 compute-0 podman[85326]: 2026-01-29 09:12:39.846054611 +0000 UTC m=+0.145127889 container init d424dd439dde71c9f297c8d54027bfd82e03c93a97d31d193079bf2dc2fb4bed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:39 compute-0 podman[85326]: 2026-01-29 09:12:39.853639996 +0000 UTC m=+0.152713254 container start d424dd439dde71c9f297c8d54027bfd82e03c93a97d31d193079bf2dc2fb4bed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:39 compute-0 thirsty_jones[85343]: 167 167
Jan 29 09:12:39 compute-0 systemd[1]: libpod-d424dd439dde71c9f297c8d54027bfd82e03c93a97d31d193079bf2dc2fb4bed.scope: Deactivated successfully.
Jan 29 09:12:39 compute-0 podman[85326]: 2026-01-29 09:12:39.862607758 +0000 UTC m=+0.161681046 container attach d424dd439dde71c9f297c8d54027bfd82e03c93a97d31d193079bf2dc2fb4bed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jones, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:39 compute-0 podman[85326]: 2026-01-29 09:12:39.8649302 +0000 UTC m=+0.164003458 container died d424dd439dde71c9f297c8d54027bfd82e03c93a97d31d193079bf2dc2fb4bed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jones, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ccd6931558b8e988e481e1d2fbc7927d157e1ab648c29799842d90a8358aea4-merged.mount: Deactivated successfully.
Jan 29 09:12:39 compute-0 podman[85326]: 2026-01-29 09:12:39.933632605 +0000 UTC m=+0.232705873 container remove d424dd439dde71c9f297c8d54027bfd82e03c93a97d31d193079bf2dc2fb4bed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jones, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 09:12:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:39 compute-0 systemd[1]: libpod-conmon-d424dd439dde71c9f297c8d54027bfd82e03c93a97d31d193079bf2dc2fb4bed.scope: Deactivated successfully.
Jan 29 09:12:40 compute-0 podman[85366]: 2026-01-29 09:12:40.065205797 +0000 UTC m=+0.045737086 container create 3907c0f45bbeee8fe914ea23fc74f8d273fc3b00837433952e85d0b17701582d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:40 compute-0 systemd[1]: Started libpod-conmon-3907c0f45bbeee8fe914ea23fc74f8d273fc3b00837433952e85d0b17701582d.scope.
Jan 29 09:12:40 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc32b5f0bf322349eeebcee6d1bf8f76f69ff97ca3617e4651a3ece5c277ed5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:40 compute-0 podman[85366]: 2026-01-29 09:12:40.042146744 +0000 UTC m=+0.022678063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc32b5f0bf322349eeebcee6d1bf8f76f69ff97ca3617e4651a3ece5c277ed5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc32b5f0bf322349eeebcee6d1bf8f76f69ff97ca3617e4651a3ece5c277ed5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc32b5f0bf322349eeebcee6d1bf8f76f69ff97ca3617e4651a3ece5c277ed5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:40 compute-0 podman[85366]: 2026-01-29 09:12:40.158486885 +0000 UTC m=+0.139018204 container init 3907c0f45bbeee8fe914ea23fc74f8d273fc3b00837433952e85d0b17701582d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:40 compute-0 podman[85366]: 2026-01-29 09:12:40.164113567 +0000 UTC m=+0.144644856 container start 3907c0f45bbeee8fe914ea23fc74f8d273fc3b00837433952e85d0b17701582d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:12:40 compute-0 podman[85366]: 2026-01-29 09:12:40.171488266 +0000 UTC m=+0.152019585 container attach 3907c0f45bbeee8fe914ea23fc74f8d273fc3b00837433952e85d0b17701582d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]: {
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:     "0": [
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:         {
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "devices": [
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "/dev/loop3"
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             ],
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_name": "ceph_lv0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_size": "21470642176",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "name": "ceph_lv0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "tags": {
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.cluster_name": "ceph",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.crush_device_class": "",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.encrypted": "0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.objectstore": "bluestore",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.osd_id": "0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.type": "block",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.vdo": "0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.with_tpm": "0"
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             },
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "type": "block",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "vg_name": "ceph_vg0"
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:         }
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:     ],
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:     "1": [
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:         {
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "devices": [
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "/dev/loop4"
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             ],
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_name": "ceph_lv1",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_size": "21470642176",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "name": "ceph_lv1",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "tags": {
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.cluster_name": "ceph",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.crush_device_class": "",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.encrypted": "0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.objectstore": "bluestore",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.osd_id": "1",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.type": "block",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.vdo": "0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.with_tpm": "0"
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             },
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "type": "block",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "vg_name": "ceph_vg1"
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:         }
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:     ],
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:     "2": [
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:         {
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "devices": [
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "/dev/loop5"
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             ],
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_name": "ceph_lv2",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_size": "21470642176",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "name": "ceph_lv2",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "tags": {
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.cluster_name": "ceph",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.crush_device_class": "",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.encrypted": "0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.objectstore": "bluestore",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.osd_id": "2",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.type": "block",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.vdo": "0",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:                 "ceph.with_tpm": "0"
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             },
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "type": "block",
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:             "vg_name": "ceph_vg2"
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:         }
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]:     ]
Jan 29 09:12:40 compute-0 laughing_elbakyan[85383]: }
Jan 29 09:12:40 compute-0 systemd[1]: libpod-3907c0f45bbeee8fe914ea23fc74f8d273fc3b00837433952e85d0b17701582d.scope: Deactivated successfully.
Jan 29 09:12:40 compute-0 podman[85366]: 2026-01-29 09:12:40.495024419 +0000 UTC m=+0.475555708 container died 3907c0f45bbeee8fe914ea23fc74f8d273fc3b00837433952e85d0b17701582d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 09:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddc32b5f0bf322349eeebcee6d1bf8f76f69ff97ca3617e4651a3ece5c277ed5-merged.mount: Deactivated successfully.
Jan 29 09:12:40 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:40 compute-0 podman[85366]: 2026-01-29 09:12:40.578578675 +0000 UTC m=+0.559109964 container remove 3907c0f45bbeee8fe914ea23fc74f8d273fc3b00837433952e85d0b17701582d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 09:12:40 compute-0 systemd[1]: libpod-conmon-3907c0f45bbeee8fe914ea23fc74f8d273fc3b00837433952e85d0b17701582d.scope: Deactivated successfully.
Jan 29 09:12:40 compute-0 sudo[85288]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 29 09:12:40 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 29 09:12:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:12:40 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:40 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 29 09:12:40 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 29 09:12:40 compute-0 sudo[85407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:40 compute-0 sudo[85407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:40 compute-0 sudo[85407]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:40 compute-0 sudo[85432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:40 compute-0 sudo[85432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:41 compute-0 ceph-mon[75183]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:41 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 29 09:12:41 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:41 compute-0 podman[85497]: 2026-01-29 09:12:41.155638472 +0000 UTC m=+0.097033040 container create bcf68806d53d9d0d7e5fb2707abcdd2a5fcc7b727ec272cf55af902102898772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_yonath, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 09:12:41 compute-0 podman[85497]: 2026-01-29 09:12:41.081602494 +0000 UTC m=+0.022997092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:41 compute-0 systemd[1]: Started libpod-conmon-bcf68806d53d9d0d7e5fb2707abcdd2a5fcc7b727ec272cf55af902102898772.scope.
Jan 29 09:12:41 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:41 compute-0 podman[85497]: 2026-01-29 09:12:41.271988443 +0000 UTC m=+0.213383031 container init bcf68806d53d9d0d7e5fb2707abcdd2a5fcc7b727ec272cf55af902102898772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 09:12:41 compute-0 podman[85497]: 2026-01-29 09:12:41.280332018 +0000 UTC m=+0.221726586 container start bcf68806d53d9d0d7e5fb2707abcdd2a5fcc7b727ec272cf55af902102898772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_yonath, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:41 compute-0 musing_yonath[85513]: 167 167
Jan 29 09:12:41 compute-0 systemd[1]: libpod-bcf68806d53d9d0d7e5fb2707abcdd2a5fcc7b727ec272cf55af902102898772.scope: Deactivated successfully.
Jan 29 09:12:41 compute-0 podman[85497]: 2026-01-29 09:12:41.30450067 +0000 UTC m=+0.245895238 container attach bcf68806d53d9d0d7e5fb2707abcdd2a5fcc7b727ec272cf55af902102898772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_yonath, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:41 compute-0 podman[85497]: 2026-01-29 09:12:41.306169996 +0000 UTC m=+0.247564574 container died bcf68806d53d9d0d7e5fb2707abcdd2a5fcc7b727ec272cf55af902102898772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 09:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5796f34e916c8c276a70b292d092650d64757f4628fe2087cacb203f1c1c8c38-merged.mount: Deactivated successfully.
Jan 29 09:12:41 compute-0 podman[85497]: 2026-01-29 09:12:41.683081409 +0000 UTC m=+0.624475987 container remove bcf68806d53d9d0d7e5fb2707abcdd2a5fcc7b727ec272cf55af902102898772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_yonath, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 09:12:41 compute-0 systemd[1]: libpod-conmon-bcf68806d53d9d0d7e5fb2707abcdd2a5fcc7b727ec272cf55af902102898772.scope: Deactivated successfully.
Jan 29 09:12:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:42 compute-0 podman[85545]: 2026-01-29 09:12:41.993951211 +0000 UTC m=+0.020989638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:42 compute-0 podman[85545]: 2026-01-29 09:12:42.174831544 +0000 UTC m=+0.201869951 container create f625c600280046ec19453222f1426a08ace0eef802123f57e7a5798fbb3e105f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate-test, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:42 compute-0 ceph-mon[75183]: Deploying daemon osd.0 on compute-0
Jan 29 09:12:42 compute-0 systemd[1]: Started libpod-conmon-f625c600280046ec19453222f1426a08ace0eef802123f57e7a5798fbb3e105f.scope.
Jan 29 09:12:42 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0846af5f92a27cdae44523c602ee3e37626e106fb7888c6fec4ef0d49c1079e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0846af5f92a27cdae44523c602ee3e37626e106fb7888c6fec4ef0d49c1079e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0846af5f92a27cdae44523c602ee3e37626e106fb7888c6fec4ef0d49c1079e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0846af5f92a27cdae44523c602ee3e37626e106fb7888c6fec4ef0d49c1079e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0846af5f92a27cdae44523c602ee3e37626e106fb7888c6fec4ef0d49c1079e2/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:42 compute-0 podman[85545]: 2026-01-29 09:12:42.351542664 +0000 UTC m=+0.378581101 container init f625c600280046ec19453222f1426a08ace0eef802123f57e7a5798fbb3e105f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:42 compute-0 podman[85545]: 2026-01-29 09:12:42.358647526 +0000 UTC m=+0.385685933 container start f625c600280046ec19453222f1426a08ace0eef802123f57e7a5798fbb3e105f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 09:12:42 compute-0 podman[85545]: 2026-01-29 09:12:42.397105064 +0000 UTC m=+0.424143501 container attach f625c600280046ec19453222f1426a08ace0eef802123f57e7a5798fbb3e105f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate-test, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:42 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate-test[85561]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 29 09:12:42 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate-test[85561]:                             [--no-systemd] [--no-tmpfs]
Jan 29 09:12:42 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate-test[85561]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 29 09:12:42 compute-0 systemd[1]: libpod-f625c600280046ec19453222f1426a08ace0eef802123f57e7a5798fbb3e105f.scope: Deactivated successfully.
Jan 29 09:12:42 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:42 compute-0 podman[85545]: 2026-01-29 09:12:42.545356826 +0000 UTC m=+0.572395233 container died f625c600280046ec19453222f1426a08ace0eef802123f57e7a5798fbb3e105f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate-test, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 09:12:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0846af5f92a27cdae44523c602ee3e37626e106fb7888c6fec4ef0d49c1079e2-merged.mount: Deactivated successfully.
Jan 29 09:12:42 compute-0 podman[85545]: 2026-01-29 09:12:42.930346138 +0000 UTC m=+0.957384545 container remove f625c600280046ec19453222f1426a08ace0eef802123f57e7a5798fbb3e105f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate-test, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:43 compute-0 systemd[1]: libpod-conmon-f625c600280046ec19453222f1426a08ace0eef802123f57e7a5798fbb3e105f.scope: Deactivated successfully.
Jan 29 09:12:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:12:43 compute-0 ceph-mon[75183]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:43 compute-0 systemd[1]: Reloading.
Jan 29 09:12:43 compute-0 systemd-rc-local-generator[85621]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:12:43 compute-0 systemd-sysv-generator[85624]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:12:43 compute-0 systemd[1]: Reloading.
Jan 29 09:12:43 compute-0 systemd-sysv-generator[85662]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:12:43 compute-0 systemd-rc-local-generator[85658]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:12:43 compute-0 systemd[1]: Starting Ceph osd.0 for 3fdce3ca-565d-5459-88e8-1ffe58b48437...
Jan 29 09:12:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:44 compute-0 podman[85718]: 2026-01-29 09:12:44.074516294 +0000 UTC m=+0.108393467 container create 919386d7f38c4871f6a73b3b0143e68386457c0d542bd77f133e89f63f052496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 09:12:44 compute-0 podman[85718]: 2026-01-29 09:12:43.989762676 +0000 UTC m=+0.023639869 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:44 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf36be0ce6b276c7cbbbec0181a798874997b77f4cae5edaf2e9e9f8d50b0676/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf36be0ce6b276c7cbbbec0181a798874997b77f4cae5edaf2e9e9f8d50b0676/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf36be0ce6b276c7cbbbec0181a798874997b77f4cae5edaf2e9e9f8d50b0676/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf36be0ce6b276c7cbbbec0181a798874997b77f4cae5edaf2e9e9f8d50b0676/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf36be0ce6b276c7cbbbec0181a798874997b77f4cae5edaf2e9e9f8d50b0676/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:44 compute-0 podman[85718]: 2026-01-29 09:12:44.272509599 +0000 UTC m=+0.306386822 container init 919386d7f38c4871f6a73b3b0143e68386457c0d542bd77f133e89f63f052496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 09:12:44 compute-0 podman[85718]: 2026-01-29 09:12:44.279126187 +0000 UTC m=+0.313003360 container start 919386d7f38c4871f6a73b3b0143e68386457c0d542bd77f133e89f63f052496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 29 09:12:44 compute-0 ceph-mon[75183]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:44 compute-0 podman[85718]: 2026-01-29 09:12:44.415069047 +0000 UTC m=+0.448946230 container attach 919386d7f38c4871f6a73b3b0143e68386457c0d542bd77f133e89f63f052496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:44 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate[85734]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:44 compute-0 bash[85718]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:44 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate[85734]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:44 compute-0 bash[85718]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:44 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:45 compute-0 lvm[85820]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:12:45 compute-0 lvm[85817]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:12:45 compute-0 lvm[85820]: VG ceph_vg1 finished
Jan 29 09:12:45 compute-0 lvm[85817]: VG ceph_vg0 finished
Jan 29 09:12:45 compute-0 lvm[85822]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:12:45 compute-0 lvm[85822]: VG ceph_vg2 finished
Jan 29 09:12:45 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate[85734]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 29 09:12:45 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate[85734]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:45 compute-0 bash[85718]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 29 09:12:45 compute-0 bash[85718]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:45 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate[85734]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:45 compute-0 bash[85718]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:45 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate[85734]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 29 09:12:45 compute-0 bash[85718]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 29 09:12:45 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate[85734]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 29 09:12:45 compute-0 bash[85718]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 29 09:12:45 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate[85734]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:45 compute-0 bash[85718]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:45 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate[85734]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:45 compute-0 bash[85718]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:45 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate[85734]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 29 09:12:45 compute-0 bash[85718]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 29 09:12:45 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate[85734]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 29 09:12:45 compute-0 bash[85718]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 29 09:12:45 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate[85734]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 29 09:12:45 compute-0 bash[85718]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 29 09:12:45 compute-0 systemd[1]: libpod-919386d7f38c4871f6a73b3b0143e68386457c0d542bd77f133e89f63f052496.scope: Deactivated successfully.
Jan 29 09:12:45 compute-0 systemd[1]: libpod-919386d7f38c4871f6a73b3b0143e68386457c0d542bd77f133e89f63f052496.scope: Consumed 1.444s CPU time.
Jan 29 09:12:45 compute-0 podman[85718]: 2026-01-29 09:12:45.389823148 +0000 UTC m=+1.423700321 container died 919386d7f38c4871f6a73b3b0143e68386457c0d542bd77f133e89f63f052496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 29 09:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf36be0ce6b276c7cbbbec0181a798874997b77f4cae5edaf2e9e9f8d50b0676-merged.mount: Deactivated successfully.
Jan 29 09:12:45 compute-0 podman[85718]: 2026-01-29 09:12:45.723253549 +0000 UTC m=+1.757130722 container remove 919386d7f38c4871f6a73b3b0143e68386457c0d542bd77f133e89f63f052496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 29 09:12:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:45 compute-0 podman[85981]: 2026-01-29 09:12:45.957700218 +0000 UTC m=+0.091769788 container create 55decf3a5ce4ef1485281dee8a6bac724baaa686cef41e55a601d5fe3372d9ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 09:12:45 compute-0 podman[85981]: 2026-01-29 09:12:45.891542192 +0000 UTC m=+0.025611792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23caa4f44b1eb8261411678d297ea11424229aaf219e08912630c7113d34a4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23caa4f44b1eb8261411678d297ea11424229aaf219e08912630c7113d34a4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23caa4f44b1eb8261411678d297ea11424229aaf219e08912630c7113d34a4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23caa4f44b1eb8261411678d297ea11424229aaf219e08912630c7113d34a4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23caa4f44b1eb8261411678d297ea11424229aaf219e08912630c7113d34a4e/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:46 compute-0 podman[85981]: 2026-01-29 09:12:46.044435129 +0000 UTC m=+0.178504719 container init 55decf3a5ce4ef1485281dee8a6bac724baaa686cef41e55a601d5fe3372d9ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:12:46 compute-0 podman[85981]: 2026-01-29 09:12:46.050055971 +0000 UTC m=+0.184125531 container start 55decf3a5ce4ef1485281dee8a6bac724baaa686cef41e55a601d5fe3372d9ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 09:12:46 compute-0 bash[85981]: 55decf3a5ce4ef1485281dee8a6bac724baaa686cef41e55a601d5fe3372d9ed
Jan 29 09:12:46 compute-0 systemd[1]: Started Ceph osd.0 for 3fdce3ca-565d-5459-88e8-1ffe58b48437.
Jan 29 09:12:46 compute-0 ceph-osd[86001]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 09:12:46 compute-0 ceph-osd[86001]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: pidfile_write: ignore empty --pid-file
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 09:12:46 compute-0 sudo[85432]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:12:46 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 09:12:46 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 09:12:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 29 09:12:46 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 29 09:12:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:12:46 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:46 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 29 09:12:46 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8400 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 09:12:46 compute-0 sudo[86021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c8000 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 09:12:46 compute-0 sudo[86021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:46 compute-0 sudo[86021]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:46 compute-0 ceph-osd[86001]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 29 09:12:46 compute-0 ceph-osd[86001]: load: jerasure load: lrc 
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 09:12:46 compute-0 sudo[86050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:46 compute-0 sudo[86050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:46 compute-0 ceph-osd[86001]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 29 09:12:46 compute-0 ceph-osd[86001]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d237c9c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d2445f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d2445f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d2445f800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d2445f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount shared_bdev_used = 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: RocksDB version: 7.9.2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Git sha 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: DB SUMMARY
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: DB Session ID:  WI1Z2ESUHF52V7Q428HV
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: CURRENT file:  CURRENT
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                         Options.error_if_exists: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.create_if_missing: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                                     Options.env: 0x556d23659ea0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                                Options.info_log: 0x556d246e28a0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                              Options.statistics: (nil)
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.use_fsync: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                              Options.db_log_dir: 
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                                 Options.wal_dir: db.wal
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.write_buffer_manager: 0x556d236bab40
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.unordered_write: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.row_cache: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                              Options.wal_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.two_write_queues: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.wal_compression: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.atomic_flush: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.max_background_jobs: 4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.max_background_compactions: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.max_subcompactions: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.max_open_files: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Compression algorithms supported:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kZSTD supported: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kXpressCompression supported: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kBZip2Compression supported: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kLZ4Compression supported: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kZlibCompression supported: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kLZ4HCCompression supported: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kSnappyCompression supported: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d1d57d79-64a5-4eb5-be6a-888ec56c8cfc
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677966490847, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677966494585, "job": 1, "event": "recovery_finished"}
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: freelist init
Jan 29 09:12:46 compute-0 ceph-osd[86001]: freelist _read_cfg
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs umount
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d2445f800 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d2445f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d2445f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d2445f800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bdev(0x556d2445f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluefs mount shared_bdev_used = 27262976
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: RocksDB version: 7.9.2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Git sha 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: DB SUMMARY
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: DB Session ID:  WI1Z2ESUHF52V7Q428HU
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: CURRENT file:  CURRENT
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                         Options.error_if_exists: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.create_if_missing: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                                     Options.env: 0x556d248b0a80
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                                Options.info_log: 0x556d246e2a40
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                              Options.statistics: (nil)
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.use_fsync: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                              Options.db_log_dir: 
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                                 Options.wal_dir: db.wal
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.write_buffer_manager: 0x556d236bb900
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.unordered_write: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.row_cache: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                              Options.wal_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.two_write_queues: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.wal_compression: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.atomic_flush: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.max_background_jobs: 4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.max_background_compactions: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.max_subcompactions: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.max_open_files: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Compression algorithms supported:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kZSTD supported: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kXpressCompression supported: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kBZip2Compression supported: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kLZ4Compression supported: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kZlibCompression supported: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kLZ4HCCompression supported: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         kSnappyCompression supported: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e2bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e30c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e30c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d246e30c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556d2365da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d1d57d79-64a5-4eb5-be6a-888ec56c8cfc
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677966551939, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677966557825, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677966, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d1d57d79-64a5-4eb5-be6a-888ec56c8cfc", "db_session_id": "WI1Z2ESUHF52V7Q428HU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677966567738, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677966, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d1d57d79-64a5-4eb5-be6a-888ec56c8cfc", "db_session_id": "WI1Z2ESUHF52V7Q428HU", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677966576187, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677966, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d1d57d79-64a5-4eb5-be6a-888ec56c8cfc", "db_session_id": "WI1Z2ESUHF52V7Q428HU", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677966578832, "job": 1, "event": "recovery_finished"}
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556d248c4000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: DB pointer 0x556d2489c000
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 29 09:12:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:12:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:12:46 compute-0 ceph-osd[86001]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 29 09:12:46 compute-0 ceph-osd[86001]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 29 09:12:46 compute-0 ceph-osd[86001]: _get_class not permitted to load lua
Jan 29 09:12:46 compute-0 ceph-osd[86001]: _get_class not permitted to load sdk
Jan 29 09:12:46 compute-0 ceph-osd[86001]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 29 09:12:46 compute-0 ceph-osd[86001]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 29 09:12:46 compute-0 ceph-osd[86001]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 29 09:12:46 compute-0 ceph-osd[86001]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 29 09:12:46 compute-0 ceph-osd[86001]: osd.0 0 load_pgs
Jan 29 09:12:46 compute-0 ceph-osd[86001]: osd.0 0 load_pgs opened 0 pgs
Jan 29 09:12:46 compute-0 ceph-osd[86001]: osd.0 0 log_to_monitors true
Jan 29 09:12:46 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0[85997]: 2026-01-29T09:12:46.649+0000 7f9f7fbe68c0 -1 osd.0 0 log_to_monitors true
Jan 29 09:12:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 29 09:12:46 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1783505366,v1:192.168.122.100:6803/1783505366]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 29 09:12:46 compute-0 podman[86538]: 2026-01-29 09:12:46.759192374 +0000 UTC m=+0.045816768 container create 9e7332b00242724811f2d9f89a78814f21ac302e5cf2cd71a9b2270374ef4597 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 09:12:46 compute-0 systemd[1]: Started libpod-conmon-9e7332b00242724811f2d9f89a78814f21ac302e5cf2cd71a9b2270374ef4597.scope.
Jan 29 09:12:46 compute-0 podman[86538]: 2026-01-29 09:12:46.737328503 +0000 UTC m=+0.023952927 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:46 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:46 compute-0 podman[86538]: 2026-01-29 09:12:46.860768045 +0000 UTC m=+0.147392459 container init 9e7332b00242724811f2d9f89a78814f21ac302e5cf2cd71a9b2270374ef4597 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sanderson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:46 compute-0 podman[86538]: 2026-01-29 09:12:46.871656759 +0000 UTC m=+0.158281173 container start 9e7332b00242724811f2d9f89a78814f21ac302e5cf2cd71a9b2270374ef4597 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:12:46 compute-0 systemd[1]: libpod-9e7332b00242724811f2d9f89a78814f21ac302e5cf2cd71a9b2270374ef4597.scope: Deactivated successfully.
Jan 29 09:12:46 compute-0 magical_sanderson[86554]: 167 167
Jan 29 09:12:46 compute-0 podman[86538]: 2026-01-29 09:12:46.87798105 +0000 UTC m=+0.164605464 container attach 9e7332b00242724811f2d9f89a78814f21ac302e5cf2cd71a9b2270374ef4597 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sanderson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:12:46 compute-0 conmon[86554]: conmon 9e7332b00242724811f2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e7332b00242724811f2d9f89a78814f21ac302e5cf2cd71a9b2270374ef4597.scope/container/memory.events
Jan 29 09:12:46 compute-0 podman[86538]: 2026-01-29 09:12:46.879541512 +0000 UTC m=+0.166165896 container died 9e7332b00242724811f2d9f89a78814f21ac302e5cf2cd71a9b2270374ef4597 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a2bc7c9efbdb86d2eb76a017675a0afc33c11f33e30bf1e600dc47eeef64f55-merged.mount: Deactivated successfully.
Jan 29 09:12:47 compute-0 podman[86538]: 2026-01-29 09:12:47.022215283 +0000 UTC m=+0.308839677 container remove 9e7332b00242724811f2d9f89a78814f21ac302e5cf2cd71a9b2270374ef4597 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sanderson, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:12:47 compute-0 ceph-mon[75183]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:47 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:47 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:47 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 29 09:12:47 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:47 compute-0 ceph-mon[75183]: from='osd.0 [v2:192.168.122.100:6802/1783505366,v1:192.168.122.100:6803/1783505366]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 29 09:12:47 compute-0 systemd[1]: libpod-conmon-9e7332b00242724811f2d9f89a78814f21ac302e5cf2cd71a9b2270374ef4597.scope: Deactivated successfully.
Jan 29 09:12:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 29 09:12:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 09:12:47 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1783505366,v1:192.168.122.100:6803/1783505366]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 29 09:12:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 29 09:12:47 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 29 09:12:47 compute-0 podman[86583]: 2026-01-29 09:12:47.228891543 +0000 UTC m=+0.042791697 container create 4116d6c0bac7b66292adf01bec9e20eb06a6136d492f00beec9bf82efc90d28d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:12:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 29 09:12:47 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1783505366,v1:192.168.122.100:6803/1783505366]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 29 09:12:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 29 09:12:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:47 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:12:47 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:12:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:12:47 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:12:47 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:47 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:12:47 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:12:47 compute-0 systemd[1]: Started libpod-conmon-4116d6c0bac7b66292adf01bec9e20eb06a6136d492f00beec9bf82efc90d28d.scope.
Jan 29 09:12:47 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d12ade3723141d00146c15125531da140c0dfdad222a11cbaa1a3b8d49c70f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d12ade3723141d00146c15125531da140c0dfdad222a11cbaa1a3b8d49c70f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d12ade3723141d00146c15125531da140c0dfdad222a11cbaa1a3b8d49c70f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d12ade3723141d00146c15125531da140c0dfdad222a11cbaa1a3b8d49c70f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d12ade3723141d00146c15125531da140c0dfdad222a11cbaa1a3b8d49c70f8/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:47 compute-0 podman[86583]: 2026-01-29 09:12:47.305668515 +0000 UTC m=+0.119568669 container init 4116d6c0bac7b66292adf01bec9e20eb06a6136d492f00beec9bf82efc90d28d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate-test, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 09:12:47 compute-0 podman[86583]: 2026-01-29 09:12:47.210245409 +0000 UTC m=+0.024145583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:47 compute-0 podman[86583]: 2026-01-29 09:12:47.310747012 +0000 UTC m=+0.124647166 container start 4116d6c0bac7b66292adf01bec9e20eb06a6136d492f00beec9bf82efc90d28d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate-test, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 09:12:47 compute-0 podman[86583]: 2026-01-29 09:12:47.314439092 +0000 UTC m=+0.128339246 container attach 4116d6c0bac7b66292adf01bec9e20eb06a6136d492f00beec9bf82efc90d28d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate-test, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:47 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate-test[86600]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 29 09:12:47 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate-test[86600]:                             [--no-systemd] [--no-tmpfs]
Jan 29 09:12:47 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate-test[86600]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 29 09:12:47 compute-0 systemd[1]: libpod-4116d6c0bac7b66292adf01bec9e20eb06a6136d492f00beec9bf82efc90d28d.scope: Deactivated successfully.
Jan 29 09:12:47 compute-0 podman[86583]: 2026-01-29 09:12:47.496841866 +0000 UTC m=+0.310742020 container died 4116d6c0bac7b66292adf01bec9e20eb06a6136d492f00beec9bf82efc90d28d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 09:12:47 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 29 09:12:47 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 29 09:12:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d12ade3723141d00146c15125531da140c0dfdad222a11cbaa1a3b8d49c70f8-merged.mount: Deactivated successfully.
Jan 29 09:12:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:48 compute-0 podman[86583]: 2026-01-29 09:12:48.102495485 +0000 UTC m=+0.916395639 container remove 4116d6c0bac7b66292adf01bec9e20eb06a6136d492f00beec9bf82efc90d28d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate-test, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 29 09:12:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:12:48 compute-0 ceph-mon[75183]: Deploying daemon osd.1 on compute-0
Jan 29 09:12:48 compute-0 ceph-mon[75183]: from='osd.0 [v2:192.168.122.100:6802/1783505366,v1:192.168.122.100:6803/1783505366]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 29 09:12:48 compute-0 ceph-mon[75183]: osdmap e7: 3 total, 0 up, 3 in
Jan 29 09:12:48 compute-0 ceph-mon[75183]: from='osd.0 [v2:192.168.122.100:6802/1783505366,v1:192.168.122.100:6803/1783505366]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 29 09:12:48 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:48 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:12:48 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:12:48 compute-0 systemd[1]: libpod-conmon-4116d6c0bac7b66292adf01bec9e20eb06a6136d492f00beec9bf82efc90d28d.scope: Deactivated successfully.
Jan 29 09:12:48 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 29 09:12:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 09:12:48 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1783505366,v1:192.168.122.100:6803/1783505366]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 29 09:12:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 29 09:12:48 compute-0 ceph-osd[86001]: osd.0 0 done with init, starting boot process
Jan 29 09:12:48 compute-0 ceph-osd[86001]: osd.0 0 start_boot
Jan 29 09:12:48 compute-0 ceph-osd[86001]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 29 09:12:48 compute-0 ceph-osd[86001]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 29 09:12:48 compute-0 ceph-osd[86001]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 29 09:12:48 compute-0 ceph-osd[86001]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 29 09:12:48 compute-0 ceph-osd[86001]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 29 09:12:49 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 29 09:12:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:12:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:12:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:12:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:12:49 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:12:49 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:49 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:12:49 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:12:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:49 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:49 compute-0 ceph-mon[75183]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:49 compute-0 ceph-mon[75183]: from='osd.0 [v2:192.168.122.100:6802/1783505366,v1:192.168.122.100:6803/1783505366]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 29 09:12:49 compute-0 ceph-mon[75183]: osdmap e8: 3 total, 0 up, 3 in
Jan 29 09:12:49 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:49 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:12:49 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:12:49 compute-0 systemd[1]: Reloading.
Jan 29 09:12:49 compute-0 systemd-sysv-generator[86666]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:12:49 compute-0 systemd-rc-local-generator[86662]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:12:49 compute-0 systemd[1]: Reloading.
Jan 29 09:12:49 compute-0 systemd-sysv-generator[86707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:12:49 compute-0 systemd-rc-local-generator[86702]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:12:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:49 compute-0 systemd[1]: Starting Ceph osd.1 for 3fdce3ca-565d-5459-88e8-1ffe58b48437...
Jan 29 09:12:50 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:12:50 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:50 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:50 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:50 compute-0 podman[86762]: 2026-01-29 09:12:50.125408611 +0000 UTC m=+0.018208272 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:50 compute-0 podman[86762]: 2026-01-29 09:12:50.319515961 +0000 UTC m=+0.212315632 container create 3fbae9ac97d920112af5aa98fb3744e5a980713d55de37ef48d5a8952b4471f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 29 09:12:50 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:50 compute-0 ceph-mon[75183]: purged_snaps scrub starts
Jan 29 09:12:50 compute-0 ceph-mon[75183]: purged_snaps scrub ok
Jan 29 09:12:50 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:50 compute-0 ceph-mon[75183]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:50 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:50 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5201918c0c19381d5ebc29069f099571fa0696d92a482e4507f993cee18613/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5201918c0c19381d5ebc29069f099571fa0696d92a482e4507f993cee18613/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5201918c0c19381d5ebc29069f099571fa0696d92a482e4507f993cee18613/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5201918c0c19381d5ebc29069f099571fa0696d92a482e4507f993cee18613/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5201918c0c19381d5ebc29069f099571fa0696d92a482e4507f993cee18613/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:51 compute-0 podman[86762]: 2026-01-29 09:12:51.036617878 +0000 UTC m=+0.929417559 container init 3fbae9ac97d920112af5aa98fb3744e5a980713d55de37ef48d5a8952b4471f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:51 compute-0 podman[86762]: 2026-01-29 09:12:51.042286791 +0000 UTC m=+0.935086442 container start 3fbae9ac97d920112af5aa98fb3744e5a980713d55de37ef48d5a8952b4471f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 09:12:51 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:12:51 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate[86777]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:51 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate[86777]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:51 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:51 compute-0 podman[86762]: 2026-01-29 09:12:51.368172898 +0000 UTC m=+1.260972579 container attach 3fbae9ac97d920112af5aa98fb3744e5a980713d55de37ef48d5a8952b4471f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 09:12:51 compute-0 bash[86762]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:51 compute-0 bash[86762]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:51 compute-0 lvm[86860]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:12:51 compute-0 lvm[86860]: VG ceph_vg0 finished
Jan 29 09:12:51 compute-0 lvm[86863]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:12:51 compute-0 lvm[86863]: VG ceph_vg1 finished
Jan 29 09:12:51 compute-0 lvm[86865]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:12:51 compute-0 lvm[86865]: VG ceph_vg2 finished
Jan 29 09:12:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:51 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:51 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate[86777]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 29 09:12:51 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate[86777]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:51 compute-0 bash[86762]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 29 09:12:51 compute-0 bash[86762]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:51 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate[86777]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:51 compute-0 bash[86762]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:12:52 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate[86777]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 29 09:12:52 compute-0 bash[86762]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 29 09:12:52 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate[86777]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 29 09:12:52 compute-0 bash[86762]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 29 09:12:52 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate[86777]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:52 compute-0 bash[86762]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:52 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate[86777]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:52 compute-0 bash[86762]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:52 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate[86777]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 29 09:12:52 compute-0 bash[86762]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 29 09:12:52 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate[86777]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 29 09:12:52 compute-0 bash[86762]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 29 09:12:52 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate[86777]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 29 09:12:52 compute-0 bash[86762]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 29 09:12:52 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:12:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:52 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:52 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:52 compute-0 systemd[1]: libpod-3fbae9ac97d920112af5aa98fb3744e5a980713d55de37ef48d5a8952b4471f9.scope: Deactivated successfully.
Jan 29 09:12:52 compute-0 systemd[1]: libpod-3fbae9ac97d920112af5aa98fb3744e5a980713d55de37ef48d5a8952b4471f9.scope: Consumed 1.289s CPU time.
Jan 29 09:12:52 compute-0 conmon[86777]: conmon 3fbae9ac97d920112af5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3fbae9ac97d920112af5aa98fb3744e5a980713d55de37ef48d5a8952b4471f9.scope/container/memory.events
Jan 29 09:12:52 compute-0 podman[86762]: 2026-01-29 09:12:52.177207068 +0000 UTC m=+2.070006739 container died 3fbae9ac97d920112af5aa98fb3744e5a980713d55de37ef48d5a8952b4471f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 09:12:52 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e5201918c0c19381d5ebc29069f099571fa0696d92a482e4507f993cee18613-merged.mount: Deactivated successfully.
Jan 29 09:12:53 compute-0 podman[86762]: 2026-01-29 09:12:53.01154995 +0000 UTC m=+2.904349601 container remove 3fbae9ac97d920112af5aa98fb3744e5a980713d55de37ef48d5a8952b4471f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:12:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:12:53 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:12:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:53 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:53 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:53 compute-0 podman[87016]: 2026-01-29 09:12:53.164155069 +0000 UTC m=+0.020303809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:53 compute-0 ceph-mon[75183]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:53 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:54 compute-0 podman[87016]: 2026-01-29 09:12:54.001661537 +0000 UTC m=+0.857810267 container create 6b017a93c8d5e4b16e9cb40f33f5c3f62905e0a712a4ab48e2c2fcded963e2d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:54 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:12:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:54 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:54 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca21362fe5c77cef5cf3d89d19ada7925016713444b2b02bb0362a0f030e5b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca21362fe5c77cef5cf3d89d19ada7925016713444b2b02bb0362a0f030e5b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca21362fe5c77cef5cf3d89d19ada7925016713444b2b02bb0362a0f030e5b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca21362fe5c77cef5cf3d89d19ada7925016713444b2b02bb0362a0f030e5b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca21362fe5c77cef5cf3d89d19ada7925016713444b2b02bb0362a0f030e5b4/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:54 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:54 compute-0 podman[87016]: 2026-01-29 09:12:54.975906346 +0000 UTC m=+1.832055106 container init 6b017a93c8d5e4b16e9cb40f33f5c3f62905e0a712a4ab48e2c2fcded963e2d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:12:54 compute-0 podman[87016]: 2026-01-29 09:12:54.980947732 +0000 UTC m=+1.837096472 container start 6b017a93c8d5e4b16e9cb40f33f5c3f62905e0a712a4ab48e2c2fcded963e2d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:12:54 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:54 compute-0 ceph-mon[75183]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:54 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:55 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:12:55 compute-0 bash[87016]: 6b017a93c8d5e4b16e9cb40f33f5c3f62905e0a712a4ab48e2c2fcded963e2d7
Jan 29 09:12:55 compute-0 systemd[1]: Started Ceph osd.1 for 3fdce3ca-565d-5459-88e8-1ffe58b48437.
Jan 29 09:12:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:55 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:55 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:55 compute-0 ceph-osd[87035]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 09:12:55 compute-0 ceph-osd[87035]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 29 09:12:55 compute-0 ceph-osd[87035]: pidfile_write: ignore empty --pid-file
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 09:12:55 compute-0 sudo[86050]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 09:12:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:12:55
Jan 29 09:12:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:12:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:12:55 compute-0 ceph-mgr[75473]: [balancer INFO root] No pools available
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:55 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb732400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb732400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb732400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb732400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb732400 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb732000 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 09:12:56 compute-0 ceph-osd[87035]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 29 09:12:56 compute-0 ceph-osd[87035]: load: jerasure load: lrc 
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 09:12:56 compute-0 ceph-osd[87035]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 29 09:12:56 compute-0 ceph-osd[87035]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 09:12:56 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fb733c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fc3d3800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fc3d3800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fc3d3800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fc3d3800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount shared_bdev_used = 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: RocksDB version: 7.9.2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Git sha 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: DB SUMMARY
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: DB Session ID:  Y2R5I31240ENA5OPZRTP
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: CURRENT file:  CURRENT
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                         Options.error_if_exists: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.create_if_missing: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                                     Options.env: 0x5579fb5c3ea0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                                Options.info_log: 0x5579fc61e8a0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                              Options.statistics: (nil)
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.use_fsync: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                              Options.db_log_dir: 
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                                 Options.wal_dir: db.wal
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.write_buffer_manager: 0x5579fc4c4b40
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.unordered_write: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.row_cache: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                              Options.wal_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.two_write_queues: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.wal_compression: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.atomic_flush: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.max_background_jobs: 4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.max_background_compactions: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.max_subcompactions: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.max_open_files: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Compression algorithms supported:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kZSTD supported: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kXpressCompression supported: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kBZip2Compression supported: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kLZ4Compression supported: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kZlibCompression supported: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kLZ4HCCompression supported: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kSnappyCompression supported: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc61ec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc61ec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc61ec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc61ec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc61ec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc61ec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc61ec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc61ec80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc61ec80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc61ec80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bb40b596-f44b-4e7c-a86e-2e39953eb003
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677976225882, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677976227297, "job": 1, "event": "recovery_finished"}
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: freelist init
Jan 29 09:12:56 compute-0 ceph-osd[87035]: freelist _read_cfg
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs umount
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fc3d3800 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fc3d3800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fc3d3800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fc3d3800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bdev(0x5579fc3d3800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluefs mount shared_bdev_used = 27262976
Jan 29 09:12:56 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: RocksDB version: 7.9.2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Git sha 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: DB SUMMARY
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: DB Session ID:  Y2R5I31240ENA5OPZRTO
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: CURRENT file:  CURRENT
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                         Options.error_if_exists: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.create_if_missing: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                                     Options.env: 0x5579fb5c3d50
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                                Options.info_log: 0x5579fc61fb00
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                              Options.statistics: (nil)
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.use_fsync: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                              Options.db_log_dir: 
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                                 Options.wal_dir: db.wal
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.write_buffer_manager: 0x5579fc4c5900
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.unordered_write: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.row_cache: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                              Options.wal_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.two_write_queues: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.wal_compression: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.atomic_flush: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.max_background_jobs: 4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.max_background_compactions: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.max_subcompactions: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.max_open_files: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Compression algorithms supported:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kZSTD supported: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kXpressCompression supported: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kBZip2Compression supported: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kLZ4Compression supported: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kZlibCompression supported: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kLZ4HCCompression supported: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         kSnappyCompression supported: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc658220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc658220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc658220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc658220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc658220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc658220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc658220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc658300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c74b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc658300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c74b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5579fc658300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5579fb5c74b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bb40b596-f44b-4e7c-a86e-2e39953eb003
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677976272489, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 29 09:12:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:56 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:56 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:56 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:12:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:12:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:12:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:12:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:12:56 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:12:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:12:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:12:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:12:56 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677976834034, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677976, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bb40b596-f44b-4e7c-a86e-2e39953eb003", "db_session_id": "Y2R5I31240ENA5OPZRTO", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:12:57 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:12:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:57 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:57 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:57 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677977423947, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677976, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bb40b596-f44b-4e7c-a86e-2e39953eb003", "db_session_id": "Y2R5I31240ENA5OPZRTO", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:12:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:12:57 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677977490804, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bb40b596-f44b-4e7c-a86e-2e39953eb003", "db_session_id": "Y2R5I31240ENA5OPZRTO", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:12:57 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677977561503, "job": 1, "event": "recovery_finished"}
Jan 29 09:12:57 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 29 09:12:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 29 09:12:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 29 09:12:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:12:57 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:57 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 29 09:12:57 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 29 09:12:57 compute-0 sudo[87465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:12:57 compute-0 sudo[87465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:57 compute-0 sudo[87465]: pam_unix(sudo:session): session closed for user root
Jan 29 09:12:57 compute-0 sudo[87490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:12:57 compute-0 sudo[87490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:12:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:57 compute-0 ceph-mon[75183]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:57 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:57 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:57 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:12:58 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5579fc621c00
Jan 29 09:12:58 compute-0 ceph-osd[87035]: rocksdb: DB pointer 0x5579fc7d8000
Jan 29 09:12:58 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 29 09:12:58 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 29 09:12:58 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 29 09:12:58 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:12:58 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1.9 total, 1.9 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.9 total, 1.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.9 total, 1.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.9 total, 1.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.9 total, 1.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.9 total, 1.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.9 total, 1.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.9 total, 1.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.9 total, 1.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.9 total, 1.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.9 total, 1.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.9 total, 1.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.9 total, 1.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:12:58 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:12:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:58 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:58 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:58 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 29 09:12:58 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 29 09:12:58 compute-0 ceph-osd[87035]: _get_class not permitted to load lua
Jan 29 09:12:58 compute-0 ceph-osd[87035]: _get_class not permitted to load sdk
Jan 29 09:12:58 compute-0 ceph-osd[87035]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 29 09:12:58 compute-0 ceph-osd[87035]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 29 09:12:58 compute-0 ceph-osd[87035]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 29 09:12:58 compute-0 ceph-osd[87035]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 29 09:12:58 compute-0 ceph-osd[87035]: osd.1 0 load_pgs
Jan 29 09:12:58 compute-0 ceph-osd[87035]: osd.1 0 load_pgs opened 0 pgs
Jan 29 09:12:58 compute-0 ceph-osd[87035]: osd.1 0 log_to_monitors true
Jan 29 09:12:58 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1[87031]: 2026-01-29T09:12:58.151+0000 7f9ce316f8c0 -1 osd.1 0 log_to_monitors true
Jan 29 09:12:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 29 09:12:58 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3500797224,v1:192.168.122.100:6807/3500797224]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 29 09:12:58 compute-0 podman[87587]: 2026-01-29 09:12:58.227384446 +0000 UTC m=+0.022425356 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:58 compute-0 podman[87587]: 2026-01-29 09:12:58.410541021 +0000 UTC m=+0.205581911 container create 0200129e96af6f6d20a6ae6966435bc0525f5cde7bf0628067075b8fc4740f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dubinsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 09:12:58 compute-0 systemd[1]: Started libpod-conmon-0200129e96af6f6d20a6ae6966435bc0525f5cde7bf0628067075b8fc4740f99.scope.
Jan 29 09:12:58 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:58 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:12:58 compute-0 podman[87587]: 2026-01-29 09:12:58.563994393 +0000 UTC m=+0.359035313 container init 0200129e96af6f6d20a6ae6966435bc0525f5cde7bf0628067075b8fc4740f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dubinsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:12:58 compute-0 podman[87587]: 2026-01-29 09:12:58.571831755 +0000 UTC m=+0.366872635 container start 0200129e96af6f6d20a6ae6966435bc0525f5cde7bf0628067075b8fc4740f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:58 compute-0 silly_dubinsky[87604]: 167 167
Jan 29 09:12:58 compute-0 systemd[1]: libpod-0200129e96af6f6d20a6ae6966435bc0525f5cde7bf0628067075b8fc4740f99.scope: Deactivated successfully.
Jan 29 09:12:58 compute-0 podman[87587]: 2026-01-29 09:12:58.66386834 +0000 UTC m=+0.458909230 container attach 0200129e96af6f6d20a6ae6966435bc0525f5cde7bf0628067075b8fc4740f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dubinsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 09:12:58 compute-0 podman[87587]: 2026-01-29 09:12:58.665107343 +0000 UTC m=+0.460148243 container died 0200129e96af6f6d20a6ae6966435bc0525f5cde7bf0628067075b8fc4740f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 09:12:58 compute-0 sudo[87644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnjfcubjoqiejmgzkycfaabrtwthwgin ; /usr/bin/python3'
Jan 29 09:12:58 compute-0 sudo[87644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:12:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 29 09:12:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 09:12:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c8ef5fbd9c2902f9672fa3b059909be94a4d04e57717f0a04ef34b9e74c7841-merged.mount: Deactivated successfully.
Jan 29 09:12:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:12:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 29 09:12:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:12:58 compute-0 ceph-mon[75183]: Deploying daemon osd.2 on compute-0
Jan 29 09:12:58 compute-0 ceph-mon[75183]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:12:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:58 compute-0 ceph-mon[75183]: from='osd.1 [v2:192.168.122.100:6806/3500797224,v1:192.168.122.100:6807/3500797224]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 29 09:12:59 compute-0 python3[87646]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:12:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3500797224,v1:192.168.122.100:6807/3500797224]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 29 09:12:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Jan 29 09:12:59 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:12:59 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 29 09:12:59 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 29 09:12:59 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Jan 29 09:12:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 29 09:12:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3500797224,v1:192.168.122.100:6807/3500797224]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 29 09:12:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 29 09:12:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:12:59 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:12:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:12:59 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:12:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:12:59 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:12:59 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:12:59 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:12:59 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:12:59 compute-0 podman[87587]: 2026-01-29 09:12:59.216999851 +0000 UTC m=+1.012040741 container remove 0200129e96af6f6d20a6ae6966435bc0525f5cde7bf0628067075b8fc4740f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:12:59 compute-0 systemd[1]: libpod-conmon-0200129e96af6f6d20a6ae6966435bc0525f5cde7bf0628067075b8fc4740f99.scope: Deactivated successfully.
Jan 29 09:12:59 compute-0 podman[87649]: 2026-01-29 09:12:59.353821483 +0000 UTC m=+0.283773420 container create dd22bc2d0073263eeda964098daab70923904a148b0e395ca18d5e84cf86cf7d (image=quay.io/ceph/ceph:v20, name=wonderful_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 29 09:12:59 compute-0 podman[87649]: 2026-01-29 09:12:59.270499765 +0000 UTC m=+0.200451702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:12:59 compute-0 systemd[1]: Started libpod-conmon-dd22bc2d0073263eeda964098daab70923904a148b0e395ca18d5e84cf86cf7d.scope.
Jan 29 09:12:59 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f2f59fe76536c5925d5a052b6bb855408918199cd3b0ac51c7f986ccd95a710/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f2f59fe76536c5925d5a052b6bb855408918199cd3b0ac51c7f986ccd95a710/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f2f59fe76536c5925d5a052b6bb855408918199cd3b0ac51c7f986ccd95a710/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:59 compute-0 podman[87676]: 2026-01-29 09:12:59.621449058 +0000 UTC m=+0.187945825 container create 53fe3bcbc3b7c1aef876e5e4115a477ab4289afb15d86a58d2729fb66261566f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:12:59 compute-0 podman[87676]: 2026-01-29 09:12:59.534463669 +0000 UTC m=+0.100960466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:12:59 compute-0 podman[87649]: 2026-01-29 09:12:59.669420962 +0000 UTC m=+0.599372899 container init dd22bc2d0073263eeda964098daab70923904a148b0e395ca18d5e84cf86cf7d (image=quay.io/ceph/ceph:v20, name=wonderful_swartz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 09:12:59 compute-0 podman[87649]: 2026-01-29 09:12:59.674600382 +0000 UTC m=+0.604552299 container start dd22bc2d0073263eeda964098daab70923904a148b0e395ca18d5e84cf86cf7d (image=quay.io/ceph/ceph:v20, name=wonderful_swartz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 29 09:12:59 compute-0 podman[87649]: 2026-01-29 09:12:59.76601613 +0000 UTC m=+0.695968047 container attach dd22bc2d0073263eeda964098daab70923904a148b0e395ca18d5e84cf86cf7d (image=quay.io/ceph/ceph:v20, name=wonderful_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:12:59 compute-0 systemd[1]: Started libpod-conmon-53fe3bcbc3b7c1aef876e5e4115a477ab4289afb15d86a58d2729fb66261566f.scope.
Jan 29 09:12:59 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0e0bf45ad0f6749e36e65b4ff98090162d18fec08b4195453f865d556f1b2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0e0bf45ad0f6749e36e65b4ff98090162d18fec08b4195453f865d556f1b2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0e0bf45ad0f6749e36e65b4ff98090162d18fec08b4195453f865d556f1b2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0e0bf45ad0f6749e36e65b4ff98090162d18fec08b4195453f865d556f1b2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa0e0bf45ad0f6749e36e65b4ff98090162d18fec08b4195453f865d556f1b2b/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:12:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:13:00 compute-0 podman[87676]: 2026-01-29 09:13:00.012649267 +0000 UTC m=+0.579146044 container init 53fe3bcbc3b7c1aef876e5e4115a477ab4289afb15d86a58d2729fb66261566f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate-test, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:13:00 compute-0 podman[87676]: 2026-01-29 09:13:00.018651849 +0000 UTC m=+0.585148616 container start 53fe3bcbc3b7c1aef876e5e4115a477ab4289afb15d86a58d2729fb66261566f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 09:13:00 compute-0 podman[87676]: 2026-01-29 09:13:00.060940681 +0000 UTC m=+0.627437478 container attach 53fe3bcbc3b7c1aef876e5e4115a477ab4289afb15d86a58d2729fb66261566f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 09:13:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 29 09:13:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 09:13:00 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:13:00 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate-test[87718]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 29 09:13:00 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate-test[87718]:                             [--no-systemd] [--no-tmpfs]
Jan 29 09:13:00 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate-test[87718]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 29 09:13:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:13:00 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 29 09:13:00 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/469527207' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 09:13:00 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:13:00 compute-0 wonderful_swartz[87692]: 
Jan 29 09:13:00 compute-0 wonderful_swartz[87692]: {"fsid":"3fdce3ca-565d-5459-88e8-1ffe58b48437","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":82,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":9,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1769677957,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-29T09:11:36:099108+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-29T09:12:57.948210+0000","services":{}},"progress_events":{}}
Jan 29 09:13:00 compute-0 systemd[1]: libpod-53fe3bcbc3b7c1aef876e5e4115a477ab4289afb15d86a58d2729fb66261566f.scope: Deactivated successfully.
Jan 29 09:13:00 compute-0 podman[87676]: 2026-01-29 09:13:00.208764061 +0000 UTC m=+0.775260838 container died 53fe3bcbc3b7c1aef876e5e4115a477ab4289afb15d86a58d2729fb66261566f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate-test, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:13:00 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3500797224,v1:192.168.122.100:6807/3500797224]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 29 09:13:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Jan 29 09:13:00 compute-0 ceph-osd[87035]: osd.1 0 done with init, starting boot process
Jan 29 09:13:00 compute-0 ceph-osd[87035]: osd.1 0 start_boot
Jan 29 09:13:00 compute-0 ceph-osd[87035]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 29 09:13:00 compute-0 ceph-osd[87035]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 29 09:13:00 compute-0 ceph-osd[87035]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 29 09:13:00 compute-0 ceph-osd[87035]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 29 09:13:00 compute-0 ceph-osd[87035]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 29 09:13:00 compute-0 systemd[1]: libpod-dd22bc2d0073263eeda964098daab70923904a148b0e395ca18d5e84cf86cf7d.scope: Deactivated successfully.
Jan 29 09:13:00 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Jan 29 09:13:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:13:00 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:00 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:00 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:00 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:13:00 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:00 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:00 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:00 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:00 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:00 compute-0 ceph-mon[75183]: from='osd.1 [v2:192.168.122.100:6806/3500797224,v1:192.168.122.100:6807/3500797224]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 29 09:13:00 compute-0 ceph-mon[75183]: osdmap e9: 3 total, 0 up, 3 in
Jan 29 09:13:00 compute-0 ceph-mon[75183]: from='osd.1 [v2:192.168.122.100:6806/3500797224,v1:192.168.122.100:6807/3500797224]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 29 09:13:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:00 compute-0 ceph-mon[75183]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:13:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa0e0bf45ad0f6749e36e65b4ff98090162d18fec08b4195453f865d556f1b2b-merged.mount: Deactivated successfully.
Jan 29 09:13:00 compute-0 podman[87649]: 2026-01-29 09:13:00.405153893 +0000 UTC m=+1.335105820 container died dd22bc2d0073263eeda964098daab70923904a148b0e395ca18d5e84cf86cf7d (image=quay.io/ceph/ceph:v20, name=wonderful_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:00 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:13:00 compute-0 podman[87676]: 2026-01-29 09:13:00.760905936 +0000 UTC m=+1.327402703 container remove 53fe3bcbc3b7c1aef876e5e4115a477ab4289afb15d86a58d2729fb66261566f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 09:13:00 compute-0 systemd[1]: libpod-conmon-53fe3bcbc3b7c1aef876e5e4115a477ab4289afb15d86a58d2729fb66261566f.scope: Deactivated successfully.
Jan 29 09:13:01 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:13:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:13:01 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:01 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:13:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f2f59fe76536c5925d5a052b6bb855408918199cd3b0ac51c7f986ccd95a710-merged.mount: Deactivated successfully.
Jan 29 09:13:01 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:01 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:01 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:01 compute-0 ceph-mon[75183]: purged_snaps scrub starts
Jan 29 09:13:01 compute-0 ceph-mon[75183]: purged_snaps scrub ok
Jan 29 09:13:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:01 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/469527207' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 09:13:01 compute-0 ceph-mon[75183]: from='osd.1 [v2:192.168.122.100:6806/3500797224,v1:192.168.122.100:6807/3500797224]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 29 09:13:01 compute-0 ceph-mon[75183]: osdmap e10: 3 total, 0 up, 3 in
Jan 29 09:13:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:01 compute-0 podman[87731]: 2026-01-29 09:13:01.563430279 +0000 UTC m=+1.323801105 container remove dd22bc2d0073263eeda964098daab70923904a148b0e395ca18d5e84cf86cf7d (image=quay.io/ceph/ceph:v20, name=wonderful_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 09:13:01 compute-0 systemd[1]: libpod-conmon-dd22bc2d0073263eeda964098daab70923904a148b0e395ca18d5e84cf86cf7d.scope: Deactivated successfully.
Jan 29 09:13:01 compute-0 sudo[87644]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:01 compute-0 systemd[1]: Reloading.
Jan 29 09:13:01 compute-0 systemd-rc-local-generator[87792]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:13:01 compute-0 systemd-sysv-generator[87797]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:13:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:13:01 compute-0 systemd[1]: Reloading.
Jan 29 09:13:02 compute-0 systemd-sysv-generator[87845]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:13:02 compute-0 systemd-rc-local-generator[87837]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:13:02 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:13:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:13:02 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:02 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:13:02 compute-0 systemd[1]: Starting Ceph osd.2 for 3fdce3ca-565d-5459-88e8-1ffe58b48437...
Jan 29 09:13:02 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:02 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:02 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:02 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:13:02 compute-0 podman[87898]: 2026-01-29 09:13:02.469263472 +0000 UTC m=+0.020410942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:13:02 compute-0 podman[87898]: 2026-01-29 09:13:02.578426149 +0000 UTC m=+0.129573589 container create 61d470b780ed0b65eb31fb72ffac2e50feb117cfc4fad7e519cedd9cb5dd5039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:13:02 compute-0 ceph-mon[75183]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:13:02 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:02 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:02 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce8012ca8e2bfe292b98d216cb4aa7efa87f5be6ca9275b53ac8c30b14934b98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce8012ca8e2bfe292b98d216cb4aa7efa87f5be6ca9275b53ac8c30b14934b98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce8012ca8e2bfe292b98d216cb4aa7efa87f5be6ca9275b53ac8c30b14934b98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce8012ca8e2bfe292b98d216cb4aa7efa87f5be6ca9275b53ac8c30b14934b98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce8012ca8e2bfe292b98d216cb4aa7efa87f5be6ca9275b53ac8c30b14934b98/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:02 compute-0 podman[87898]: 2026-01-29 09:13:02.994625592 +0000 UTC m=+0.545773032 container init 61d470b780ed0b65eb31fb72ffac2e50feb117cfc4fad7e519cedd9cb5dd5039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 09:13:03 compute-0 podman[87898]: 2026-01-29 09:13:03.001080607 +0000 UTC m=+0.552228047 container start 61d470b780ed0b65eb31fb72ffac2e50feb117cfc4fad7e519cedd9cb5dd5039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 09:13:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:13:03 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:13:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:13:03 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:03 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:13:03 compute-0 podman[87898]: 2026-01-29 09:13:03.167969932 +0000 UTC m=+0.719117392 container attach 61d470b780ed0b65eb31fb72ffac2e50feb117cfc4fad7e519cedd9cb5dd5039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:13:03 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate[87913]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:13:03 compute-0 bash[87898]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:13:03 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate[87913]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:13:03 compute-0 bash[87898]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:13:03 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:03 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:03 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:03 compute-0 lvm[87998]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:13:03 compute-0 lvm[87998]: VG ceph_vg0 finished
Jan 29 09:13:03 compute-0 lvm[87999]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:13:03 compute-0 lvm[87999]: VG ceph_vg1 finished
Jan 29 09:13:03 compute-0 lvm[88001]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:13:03 compute-0 lvm[88001]: VG ceph_vg2 finished
Jan 29 09:13:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:13:04 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate[87913]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 29 09:13:04 compute-0 bash[87898]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 29 09:13:04 compute-0 bash[87898]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:13:04 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate[87913]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:13:04 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate[87913]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:13:04 compute-0 bash[87898]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 09:13:04 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:13:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:13:04 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:04 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:13:04 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate[87913]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 29 09:13:04 compute-0 bash[87898]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 29 09:13:04 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate[87913]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 29 09:13:04 compute-0 bash[87898]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 29 09:13:04 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate[87913]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:04 compute-0 bash[87898]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:04 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate[87913]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:04 compute-0 bash[87898]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:04 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate[87913]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 29 09:13:04 compute-0 bash[87898]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 29 09:13:04 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate[87913]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 29 09:13:04 compute-0 bash[87898]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 29 09:13:04 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate[87913]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 29 09:13:04 compute-0 bash[87898]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 29 09:13:04 compute-0 systemd[1]: libpod-61d470b780ed0b65eb31fb72ffac2e50feb117cfc4fad7e519cedd9cb5dd5039.scope: Deactivated successfully.
Jan 29 09:13:04 compute-0 systemd[1]: libpod-61d470b780ed0b65eb31fb72ffac2e50feb117cfc4fad7e519cedd9cb5dd5039.scope: Consumed 1.690s CPU time.
Jan 29 09:13:04 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:04 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:04 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:04 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:04 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:04 compute-0 podman[88115]: 2026-01-29 09:13:04.368292804 +0000 UTC m=+0.036567178 container died 61d470b780ed0b65eb31fb72ffac2e50feb117cfc4fad7e519cedd9cb5dd5039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 09:13:04 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:13:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce8012ca8e2bfe292b98d216cb4aa7efa87f5be6ca9275b53ac8c30b14934b98-merged.mount: Deactivated successfully.
Jan 29 09:13:05 compute-0 podman[88115]: 2026-01-29 09:13:05.033977473 +0000 UTC m=+0.702251817 container remove 61d470b780ed0b65eb31fb72ffac2e50feb117cfc4fad7e519cedd9cb5dd5039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:13:05 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:13:05 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:13:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:13:05 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:05 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:05 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:05 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:05 compute-0 podman[88174]: 2026-01-29 09:13:05.364022223 +0000 UTC m=+0.092677754 container create 1b320510371f97573f40e17ad46374b299fa0c5f5ad7087908c16b2967f29cf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 09:13:05 compute-0 ceph-mon[75183]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:13:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:05 compute-0 podman[88174]: 2026-01-29 09:13:05.301014312 +0000 UTC m=+0.029669873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db5d5864854a98788aa8b7b4b54f38eee5d7efc97f50f51b3bbc145193444d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db5d5864854a98788aa8b7b4b54f38eee5d7efc97f50f51b3bbc145193444d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db5d5864854a98788aa8b7b4b54f38eee5d7efc97f50f51b3bbc145193444d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db5d5864854a98788aa8b7b4b54f38eee5d7efc97f50f51b3bbc145193444d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db5d5864854a98788aa8b7b4b54f38eee5d7efc97f50f51b3bbc145193444d4/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:05 compute-0 podman[88174]: 2026-01-29 09:13:05.463169979 +0000 UTC m=+0.191825530 container init 1b320510371f97573f40e17ad46374b299fa0c5f5ad7087908c16b2967f29cf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:05 compute-0 podman[88174]: 2026-01-29 09:13:05.483396735 +0000 UTC m=+0.212052266 container start 1b320510371f97573f40e17ad46374b299fa0c5f5ad7087908c16b2967f29cf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 29 09:13:05 compute-0 bash[88174]: 1b320510371f97573f40e17ad46374b299fa0c5f5ad7087908c16b2967f29cf7
Jan 29 09:13:05 compute-0 systemd[1]: Started Ceph osd.2 for 3fdce3ca-565d-5459-88e8-1ffe58b48437.
Jan 29 09:13:05 compute-0 ceph-osd[88193]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 09:13:05 compute-0 ceph-osd[88193]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 29 09:13:05 compute-0 ceph-osd[88193]: pidfile_write: ignore empty --pid-file
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 09:13:05 compute-0 sudo[87490]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 09:13:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a400 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 09:13:05 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900a000 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 09:13:05 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:05 compute-0 ceph-osd[88193]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 29 09:13:05 compute-0 ceph-osd[88193]: load: jerasure load: lrc 
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 09:13:05 compute-0 sudo[88217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:13:05 compute-0 sudo[88217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:05 compute-0 ceph-osd[88193]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 29 09:13:05 compute-0 ceph-osd[88193]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 sudo[88217]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 09:13:05 compute-0 sudo[88261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:13:05 compute-0 sudo[88261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde900bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde9ca1800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde9ca1800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde9ca1800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bdev(0x55bde9ca1800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluefs mount
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluefs mount shared_bdev_used = 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: RocksDB version: 7.9.2
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Git sha 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: DB SUMMARY
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: DB Session ID:  EN3JRDZ6MPS7LAJKI7NJ
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: CURRENT file:  CURRENT
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                         Options.error_if_exists: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                       Options.create_if_missing: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                                     Options.env: 0x55bde8e9bea0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                                Options.info_log: 0x55bde9eec8a0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                              Options.statistics: (nil)
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                               Options.use_fsync: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                              Options.db_log_dir: 
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                                 Options.wal_dir: db.wal
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                    Options.write_buffer_manager: 0x55bde8f00b40
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.unordered_write: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                               Options.row_cache: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                              Options.wal_filter: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.two_write_queues: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.wal_compression: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.atomic_flush: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.max_background_jobs: 4
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.max_background_compactions: -1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.max_subcompactions: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                          Options.max_open_files: -1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Compression algorithms supported:
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         kZSTD supported: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         kXpressCompression supported: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         kBZip2Compression supported: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         kLZ4Compression supported: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         kZlibCompression supported: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         kLZ4HCCompression supported: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         kSnappyCompression supported: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:05 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecc80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecc80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecc80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7a0f313f-48fa-4596-ac65-fcd8389c6b71
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677985988002, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677985995301, "job": 1, "event": "recovery_finished"}
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: freelist init
Jan 29 09:13:06 compute-0 ceph-osd[88193]: freelist _read_cfg
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluefs umount
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bdev(0x55bde9ca1800 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bdev(0x55bde9ca1800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bdev(0x55bde9ca1800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bdev(0x55bde9ca1800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bdev(0x55bde9ca1800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluefs mount
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluefs mount shared_bdev_used = 27262976
Jan 29 09:13:06 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: RocksDB version: 7.9.2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Git sha 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: DB SUMMARY
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: DB Session ID:  EN3JRDZ6MPS7LAJKI7NI
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: CURRENT file:  CURRENT
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                         Options.error_if_exists: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.create_if_missing: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                                     Options.env: 0x55bdea0bca80
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                                Options.info_log: 0x55bde9eec960
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                              Options.statistics: (nil)
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.use_fsync: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                              Options.db_log_dir: 
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                                 Options.wal_dir: db.wal
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.write_buffer_manager: 0x55bde8f01900
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.unordered_write: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.row_cache: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                              Options.wal_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.two_write_queues: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.wal_compression: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.atomic_flush: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.max_background_jobs: 4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.max_background_compactions: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.max_subcompactions: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.max_open_files: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Compression algorithms supported:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         kZSTD supported: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         kXpressCompression supported: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         kBZip2Compression supported: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         kLZ4Compression supported: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         kZlibCompression supported: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         kLZ4HCCompression supported: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         kSnappyCompression supported: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eecbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eed0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eed0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:           Options.merge_operator: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bde9eed0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bde8e9fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.compression: LZ4
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.num_levels: 7
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.bloom_locality: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                               Options.ttl: 2592000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                       Options.enable_blob_files: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                           Options.min_blob_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7a0f313f-48fa-4596-ac65-fcd8389c6b71
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677986049921, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 29 09:13:06 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:13:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:13:06 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:06 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677986180659, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677986, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a0f313f-48fa-4596-ac65-fcd8389c6b71", "db_session_id": "EN3JRDZ6MPS7LAJKI7NI", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677986254843, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677986, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a0f313f-48fa-4596-ac65-fcd8389c6b71", "db_session_id": "EN3JRDZ6MPS7LAJKI7NI", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:13:06 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:06 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:06 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:06 compute-0 podman[88672]: 2026-01-29 09:13:06.234461487 +0000 UTC m=+0.034733920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:13:06 compute-0 podman[88672]: 2026-01-29 09:13:06.388504199 +0000 UTC m=+0.188776602 container create 2de9f6975f08fbd3ac6c61f2bc35e964fc19eb1d4e59f7cfca96bc9a8a03a5eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677986433826, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677986, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7a0f313f-48fa-4596-ac65-fcd8389c6b71", "db_session_id": "EN3JRDZ6MPS7LAJKI7NI", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769677986489287, "job": 1, "event": "recovery_finished"}
Jan 29 09:13:06 compute-0 ceph-osd[88193]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 29 09:13:06 compute-0 systemd[1]: Started libpod-conmon-2de9f6975f08fbd3ac6c61f2bc35e964fc19eb1d4e59f7cfca96bc9a8a03a5eb.scope.
Jan 29 09:13:06 compute-0 ceph-mgr[75473]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 09:13:06 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:06 compute-0 podman[88672]: 2026-01-29 09:13:06.728071554 +0000 UTC m=+0.528343977 container init 2de9f6975f08fbd3ac6c61f2bc35e964fc19eb1d4e59f7cfca96bc9a8a03a5eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:13:06 compute-0 podman[88672]: 2026-01-29 09:13:06.734542713 +0000 UTC m=+0.534815116 container start 2de9f6975f08fbd3ac6c61f2bc35e964fc19eb1d4e59f7cfca96bc9a8a03a5eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:13:06 compute-0 fervent_austin[88690]: 167 167
Jan 29 09:13:06 compute-0 systemd[1]: libpod-2de9f6975f08fbd3ac6c61f2bc35e964fc19eb1d4e59f7cfca96bc9a8a03a5eb.scope: Deactivated successfully.
Jan 29 09:13:06 compute-0 conmon[88690]: conmon 2de9f6975f08fbd3ac6c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2de9f6975f08fbd3ac6c61f2bc35e964fc19eb1d4e59f7cfca96bc9a8a03a5eb.scope/container/memory.events
Jan 29 09:13:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:06 compute-0 ceph-mon[75183]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:13:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:06 compute-0 podman[88672]: 2026-01-29 09:13:06.967561192 +0000 UTC m=+0.767833615 container attach 2de9f6975f08fbd3ac6c61f2bc35e964fc19eb1d4e59f7cfca96bc9a8a03a5eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_austin, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 09:13:06 compute-0 podman[88672]: 2026-01-29 09:13:06.96902496 +0000 UTC m=+0.769297363 container died 2de9f6975f08fbd3ac6c61f2bc35e964fc19eb1d4e59f7cfca96bc9a8a03a5eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_austin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:07 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:13:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:13:07 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:07 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:13:07 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:07 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:07 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:07 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bdea106000
Jan 29 09:13:07 compute-0 ceph-osd[88193]: rocksdb: DB pointer 0x55bdea0a6000
Jan 29 09:13:07 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 29 09:13:07 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 29 09:13:07 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 29 09:13:07 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:13:07 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:13:07 compute-0 ceph-osd[88193]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 29 09:13:07 compute-0 ceph-osd[88193]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 29 09:13:07 compute-0 ceph-osd[88193]: _get_class not permitted to load lua
Jan 29 09:13:07 compute-0 ceph-osd[88193]: _get_class not permitted to load sdk
Jan 29 09:13:07 compute-0 ceph-osd[88193]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 29 09:13:07 compute-0 ceph-osd[88193]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 29 09:13:07 compute-0 ceph-osd[88193]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 29 09:13:07 compute-0 ceph-osd[88193]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 29 09:13:07 compute-0 ceph-osd[88193]: osd.2 0 load_pgs
Jan 29 09:13:07 compute-0 ceph-osd[88193]: osd.2 0 load_pgs opened 0 pgs
Jan 29 09:13:07 compute-0 ceph-osd[88193]: osd.2 0 log_to_monitors true
Jan 29 09:13:07 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2[88189]: 2026-01-29T09:13:07.610+0000 7f6e8239b8c0 -1 osd.2 0 log_to_monitors true
Jan 29 09:13:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 29 09:13:07 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2045548742,v1:192.168.122.100:6811/2045548742]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 29 09:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-986d69644b3d5e2ac553b80c9f43afca9825106debb90d67d7a6781c4353412c-merged.mount: Deactivated successfully.
Jan 29 09:13:07 compute-0 podman[88672]: 2026-01-29 09:13:07.882955018 +0000 UTC m=+1.683227421 container remove 2de9f6975f08fbd3ac6c61f2bc35e964fc19eb1d4e59f7cfca96bc9a8a03a5eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:13:07 compute-0 systemd[1]: libpod-conmon-2de9f6975f08fbd3ac6c61f2bc35e964fc19eb1d4e59f7cfca96bc9a8a03a5eb.scope: Deactivated successfully.
Jan 29 09:13:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:13:07 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:07 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:07 compute-0 ceph-mon[75183]: from='osd.2 [v2:192.168.122.100:6810/2045548742,v1:192.168.122.100:6811/2045548742]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 29 09:13:08 compute-0 ceph-osd[86001]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 4.821 iops: 1234.239 elapsed_sec: 2.431
Jan 29 09:13:08 compute-0 ceph-osd[86001]: log_channel(cluster) log [WRN] : OSD bench result of 1234.239264 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 29 09:13:08 compute-0 ceph-osd[86001]: osd.0 0 waiting for initial osdmap
Jan 29 09:13:08 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0[85997]: 2026-01-29T09:13:08.004+0000 7f9f7c37a640 -1 osd.0 0 waiting for initial osdmap
Jan 29 09:13:08 compute-0 ceph-osd[86001]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 29 09:13:08 compute-0 ceph-osd[86001]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 29 09:13:08 compute-0 ceph-osd[86001]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 29 09:13:08 compute-0 ceph-osd[86001]: osd.0 10 check_osdmap_features require_osd_release unknown -> tentacle
Jan 29 09:13:08 compute-0 podman[88748]: 2026-01-29 09:13:08.100482461 +0000 UTC m=+0.118877202 container create 8c22c3887facd9dcb3e790a4548a6b4b6460601fbe2f7318a72041e934f30075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hertz, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 09:13:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:13:08 compute-0 podman[88748]: 2026-01-29 09:13:08.018985458 +0000 UTC m=+0.037380199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:13:08 compute-0 ceph-osd[86001]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 29 09:13:08 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-0[85997]: 2026-01-29T09:13:08.136+0000 7f9f7696d640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 29 09:13:08 compute-0 ceph-osd[86001]: osd.0 10 set_numa_affinity not setting numa affinity
Jan 29 09:13:08 compute-0 ceph-osd[86001]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 29 09:13:08 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1783505366; not ready for session (expect reconnect)
Jan 29 09:13:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:13:08 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:08 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 09:13:08 compute-0 systemd[1]: Started libpod-conmon-8c22c3887facd9dcb3e790a4548a6b4b6460601fbe2f7318a72041e934f30075.scope.
Jan 29 09:13:08 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad90262921314292d4780758b0e25c1d441e60d2f89c8947944d89a2a24f4cda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad90262921314292d4780758b0e25c1d441e60d2f89c8947944d89a2a24f4cda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad90262921314292d4780758b0e25c1d441e60d2f89c8947944d89a2a24f4cda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad90262921314292d4780758b0e25c1d441e60d2f89c8947944d89a2a24f4cda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:08 compute-0 podman[88748]: 2026-01-29 09:13:08.227770152 +0000 UTC m=+0.246164923 container init 8c22c3887facd9dcb3e790a4548a6b4b6460601fbe2f7318a72041e934f30075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 29 09:13:08 compute-0 podman[88748]: 2026-01-29 09:13:08.243788961 +0000 UTC m=+0.262183702 container start 8c22c3887facd9dcb3e790a4548a6b4b6460601fbe2f7318a72041e934f30075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:08 compute-0 podman[88748]: 2026-01-29 09:13:08.277610477 +0000 UTC m=+0.296005218 container attach 8c22c3887facd9dcb3e790a4548a6b4b6460601fbe2f7318a72041e934f30075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 09:13:08 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 29 09:13:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 09:13:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:08 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:08 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:08 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2045548742,v1:192.168.122.100:6811/2045548742]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 29 09:13:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 29 09:13:08 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1783505366,v1:192.168.122.100:6803/1783505366] boot
Jan 29 09:13:08 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 29 09:13:08 compute-0 ceph-osd[86001]: osd.0 11 state: booting -> active
Jan 29 09:13:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 29 09:13:08 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2045548742,v1:192.168.122.100:6811/2045548742]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 29 09:13:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e11 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 29 09:13:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 09:13:08 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:08 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:08 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:08 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:08 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:08 compute-0 ceph-mgr[75473]: [devicehealth INFO root] creating mgr pool
Jan 29 09:13:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 29 09:13:08 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 29 09:13:08 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 29 09:13:08 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 29 09:13:09 compute-0 ceph-mon[75183]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 09:13:09 compute-0 ceph-mon[75183]: OSD bench result of 1234.239264 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 29 09:13:09 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:09 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:09 compute-0 ceph-mon[75183]: from='osd.2 [v2:192.168.122.100:6810/2045548742,v1:192.168.122.100:6811/2045548742]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 29 09:13:09 compute-0 ceph-mon[75183]: osd.0 [v2:192.168.122.100:6802/1783505366,v1:192.168.122.100:6803/1783505366] boot
Jan 29 09:13:09 compute-0 ceph-mon[75183]: osdmap e11: 3 total, 1 up, 3 in
Jan 29 09:13:09 compute-0 ceph-mon[75183]: from='osd.2 [v2:192.168.122.100:6810/2045548742,v1:192.168.122.100:6811/2045548742]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 29 09:13:09 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 09:13:09 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:09 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:09 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 29 09:13:09 compute-0 lvm[88843]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:13:09 compute-0 lvm[88843]: VG ceph_vg0 finished
Jan 29 09:13:09 compute-0 lvm[88846]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:13:09 compute-0 lvm[88846]: VG ceph_vg1 finished
Jan 29 09:13:09 compute-0 lvm[88848]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:13:09 compute-0 lvm[88848]: VG ceph_vg2 finished
Jan 29 09:13:09 compute-0 lvm[88849]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:13:09 compute-0 lvm[88849]: VG ceph_vg1 finished
Jan 29 09:13:09 compute-0 lvm[88851]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:13:09 compute-0 lvm[88851]: VG ceph_vg1 finished
Jan 29 09:13:09 compute-0 vigorous_hertz[88765]: {}
Jan 29 09:13:09 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:09 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:09 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 29 09:13:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 09:13:09 compute-0 systemd[1]: libpod-8c22c3887facd9dcb3e790a4548a6b4b6460601fbe2f7318a72041e934f30075.scope: Deactivated successfully.
Jan 29 09:13:09 compute-0 podman[88748]: 2026-01-29 09:13:09.352008294 +0000 UTC m=+1.370403055 container died 8c22c3887facd9dcb3e790a4548a6b4b6460601fbe2f7318a72041e934f30075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hertz, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:09 compute-0 systemd[1]: libpod-8c22c3887facd9dcb3e790a4548a6b4b6460601fbe2f7318a72041e934f30075.scope: Consumed 1.658s CPU time.
Jan 29 09:13:09 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2045548742,v1:192.168.122.100:6811/2045548742]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 29 09:13:09 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 29 09:13:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 29 09:13:09 compute-0 ceph-osd[88193]: osd.2 0 done with init, starting boot process
Jan 29 09:13:09 compute-0 ceph-osd[88193]: osd.2 0 start_boot
Jan 29 09:13:09 compute-0 ceph-osd[88193]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 29 09:13:09 compute-0 ceph-osd[88193]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 29 09:13:09 compute-0 ceph-osd[88193]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 29 09:13:09 compute-0 ceph-osd[88193]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 29 09:13:09 compute-0 ceph-osd[88193]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 29 09:13:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Jan 29 09:13:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 29 09:13:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 29 09:13:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 29 09:13:09 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 29 09:13:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:09 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:09 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:09 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:09 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 29 09:13:09 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 29 09:13:09 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2045548742; not ready for session (expect reconnect)
Jan 29 09:13:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:09 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:09 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:09 compute-0 ceph-osd[86001]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 29 09:13:09 compute-0 ceph-osd[86001]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 29 09:13:09 compute-0 ceph-osd[86001]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 29 09:13:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 29 09:13:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad90262921314292d4780758b0e25c1d441e60d2f89c8947944d89a2a24f4cda-merged.mount: Deactivated successfully.
Jan 29 09:13:10 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:10 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:10 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:10 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:10 compute-0 ceph-mon[75183]: from='osd.2 [v2:192.168.122.100:6810/2045548742,v1:192.168.122.100:6811/2045548742]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 29 09:13:10 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 29 09:13:10 compute-0 ceph-mon[75183]: osdmap e12: 3 total, 1 up, 3 in
Jan 29 09:13:10 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:10 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:10 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 29 09:13:10 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:10 compute-0 podman[88748]: 2026-01-29 09:13:10.577915686 +0000 UTC m=+2.596310427 container remove 8c22c3887facd9dcb3e790a4548a6b4b6460601fbe2f7318a72041e934f30075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hertz, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:13:10 compute-0 systemd[1]: libpod-conmon-8c22c3887facd9dcb3e790a4548a6b4b6460601fbe2f7318a72041e934f30075.scope: Deactivated successfully.
Jan 29 09:13:10 compute-0 sudo[88261]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:13:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 29 09:13:10 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:13:10 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 29 09:13:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 29 09:13:10 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 29 09:13:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:10 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:10 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:10 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:10 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:10 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:10 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2045548742; not ready for session (expect reconnect)
Jan 29 09:13:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:10 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:10 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:10 compute-0 sudo[88866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:13:10 compute-0 sudo[88866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:10 compute-0 sudo[88866]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:11 compute-0 sudo[88891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:13:11 compute-0 sudo[88891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:11 compute-0 sudo[88891]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:11 compute-0 sudo[88917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 29 09:13:11 compute-0 sudo[88917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:11 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:11 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:11 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:11 compute-0 ceph-mon[75183]: purged_snaps scrub starts
Jan 29 09:13:11 compute-0 ceph-mon[75183]: purged_snaps scrub ok
Jan 29 09:13:11 compute-0 ceph-mon[75183]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 29 09:13:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 29 09:13:11 compute-0 ceph-mon[75183]: osdmap e13: 3 total, 1 up, 3 in
Jan 29 09:13:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:11 compute-0 podman[88983]: 2026-01-29 09:13:11.820831544 +0000 UTC m=+0.201223267 container exec 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 09:13:11 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2045548742; not ready for session (expect reconnect)
Jan 29 09:13:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:11 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:11 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:11 compute-0 podman[88983]: 2026-01-29 09:13:11.938632677 +0000 UTC m=+0.319024400 container exec_died 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 29 09:13:12 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:12 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:12 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:12 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:12 compute-0 ceph-mon[75183]: pgmap v42: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 29 09:13:12 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:12 compute-0 ceph-osd[87035]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 7.570 iops: 1937.813 elapsed_sec: 1.548
Jan 29 09:13:12 compute-0 ceph-osd[87035]: log_channel(cluster) log [WRN] : OSD bench result of 1937.813390 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 29 09:13:12 compute-0 ceph-osd[87035]: osd.1 0 waiting for initial osdmap
Jan 29 09:13:12 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1[87031]: 2026-01-29T09:13:12.751+0000 7f9cdf903640 -1 osd.1 0 waiting for initial osdmap
Jan 29 09:13:12 compute-0 ceph-osd[87035]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 29 09:13:12 compute-0 ceph-osd[87035]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 29 09:13:12 compute-0 ceph-osd[87035]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 29 09:13:12 compute-0 ceph-osd[87035]: osd.1 13 check_osdmap_features require_osd_release unknown -> tentacle
Jan 29 09:13:12 compute-0 sudo[88917]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:13:12 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:13:12 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-1[87031]: 2026-01-29T09:13:12.814+0000 7f9cd9ef6640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 29 09:13:12 compute-0 ceph-osd[87035]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 29 09:13:12 compute-0 ceph-osd[87035]: osd.1 13 set_numa_affinity not setting numa affinity
Jan 29 09:13:12 compute-0 ceph-osd[87035]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 29 09:13:12 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:12 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2045548742; not ready for session (expect reconnect)
Jan 29 09:13:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:12 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:12 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:12 compute-0 sudo[89134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:13:12 compute-0 sudo[89134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:12 compute-0 sudo[89134]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:12 compute-0 sudo[89159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- inventory --format=json-pretty --filter-for-batch
Jan 29 09:13:12 compute-0 sudo[89159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:13:13 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3500797224; not ready for session (expect reconnect)
Jan 29 09:13:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:13 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:13 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 09:13:13 compute-0 podman[89197]: 2026-01-29 09:13:13.356211186 +0000 UTC m=+0.048534421 container create cfa4f132993b74875e3f0138c9b69484f976d166ae6032d76f09b51f96e4bde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:13:13 compute-0 systemd[1]: Started libpod-conmon-cfa4f132993b74875e3f0138c9b69484f976d166ae6032d76f09b51f96e4bde3.scope.
Jan 29 09:13:13 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:13 compute-0 podman[89197]: 2026-01-29 09:13:13.330917554 +0000 UTC m=+0.023240809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:13:13 compute-0 podman[89197]: 2026-01-29 09:13:13.470074066 +0000 UTC m=+0.162397291 container init cfa4f132993b74875e3f0138c9b69484f976d166ae6032d76f09b51f96e4bde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:13:13 compute-0 podman[89197]: 2026-01-29 09:13:13.482732287 +0000 UTC m=+0.175055522 container start cfa4f132993b74875e3f0138c9b69484f976d166ae6032d76f09b51f96e4bde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 09:13:13 compute-0 jolly_thompson[89212]: 167 167
Jan 29 09:13:13 compute-0 systemd[1]: libpod-cfa4f132993b74875e3f0138c9b69484f976d166ae6032d76f09b51f96e4bde3.scope: Deactivated successfully.
Jan 29 09:13:13 compute-0 podman[89197]: 2026-01-29 09:13:13.502924436 +0000 UTC m=+0.195247661 container attach cfa4f132993b74875e3f0138c9b69484f976d166ae6032d76f09b51f96e4bde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_thompson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 09:13:13 compute-0 podman[89197]: 2026-01-29 09:13:13.506198731 +0000 UTC m=+0.198521966 container died cfa4f132993b74875e3f0138c9b69484f976d166ae6032d76f09b51f96e4bde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_thompson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:13:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-930dde9405a7d00fb418e09d4319be2c09ae0722a338bce40c7d47bc30430f6d-merged.mount: Deactivated successfully.
Jan 29 09:13:13 compute-0 podman[89197]: 2026-01-29 09:13:13.624729973 +0000 UTC m=+0.317053208 container remove cfa4f132993b74875e3f0138c9b69484f976d166ae6032d76f09b51f96e4bde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 09:13:13 compute-0 systemd[1]: libpod-conmon-cfa4f132993b74875e3f0138c9b69484f976d166ae6032d76f09b51f96e4bde3.scope: Deactivated successfully.
Jan 29 09:13:13 compute-0 podman[89235]: 2026-01-29 09:13:13.759359466 +0000 UTC m=+0.053534462 container create ffcf2cfda86579974accc51841d3b5fa0683b8bb29393fc0af5ea64cc6aea09f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_yonath, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:13 compute-0 ceph-osd[87035]: osd.1 13 tick checking mon for new map
Jan 29 09:13:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 29 09:13:13 compute-0 systemd[1]: Started libpod-conmon-ffcf2cfda86579974accc51841d3b5fa0683b8bb29393fc0af5ea64cc6aea09f.scope.
Jan 29 09:13:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Jan 29 09:13:13 compute-0 podman[89235]: 2026-01-29 09:13:13.734371622 +0000 UTC m=+0.028546638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:13:13 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd96231598898ccdedc1535d4998836e0dd3eeacce9a53dcd9fef63ab43629dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd96231598898ccdedc1535d4998836e0dd3eeacce9a53dcd9fef63ab43629dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd96231598898ccdedc1535d4998836e0dd3eeacce9a53dcd9fef63ab43629dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd96231598898ccdedc1535d4998836e0dd3eeacce9a53dcd9fef63ab43629dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:13 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/3500797224,v1:192.168.122.100:6807/3500797224] boot
Jan 29 09:13:13 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Jan 29 09:13:13 compute-0 ceph-osd[87035]: osd.1 14 state: booting -> active
Jan 29 09:13:13 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 09:13:13 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:13 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:13 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:13 compute-0 ceph-mon[75183]: OSD bench result of 1937.813390 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 29 09:13:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:13 compute-0 podman[89235]: 2026-01-29 09:13:13.866338375 +0000 UTC m=+0.160513391 container init ffcf2cfda86579974accc51841d3b5fa0683b8bb29393fc0af5ea64cc6aea09f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_yonath, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:13:13 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2045548742; not ready for session (expect reconnect)
Jan 29 09:13:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:13 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:13 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:13 compute-0 podman[89235]: 2026-01-29 09:13:13.880510576 +0000 UTC m=+0.174685572 container start ffcf2cfda86579974accc51841d3b5fa0683b8bb29393fc0af5ea64cc6aea09f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_yonath, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 09:13:13 compute-0 podman[89235]: 2026-01-29 09:13:13.902949714 +0000 UTC m=+0.197124740 container attach ffcf2cfda86579974accc51841d3b5fa0683b8bb29393fc0af5ea64cc6aea09f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_yonath, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:13:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 29 09:13:14 compute-0 clever_yonath[89251]: [
Jan 29 09:13:14 compute-0 clever_yonath[89251]:     {
Jan 29 09:13:14 compute-0 clever_yonath[89251]:         "available": false,
Jan 29 09:13:14 compute-0 clever_yonath[89251]:         "being_replaced": false,
Jan 29 09:13:14 compute-0 clever_yonath[89251]:         "ceph_device_lvm": false,
Jan 29 09:13:14 compute-0 clever_yonath[89251]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:         "lsm_data": {},
Jan 29 09:13:14 compute-0 clever_yonath[89251]:         "lvs": [],
Jan 29 09:13:14 compute-0 clever_yonath[89251]:         "path": "/dev/sr0",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:         "rejected_reasons": [
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "Has a FileSystem",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "Insufficient space (<5GB)"
Jan 29 09:13:14 compute-0 clever_yonath[89251]:         ],
Jan 29 09:13:14 compute-0 clever_yonath[89251]:         "sys_api": {
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "actuators": null,
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "device_nodes": [
Jan 29 09:13:14 compute-0 clever_yonath[89251]:                 "sr0"
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             ],
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "devname": "sr0",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "human_readable_size": "482.00 KB",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "id_bus": "ata",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "model": "QEMU DVD-ROM",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "nr_requests": "2",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "parent": "/dev/sr0",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "partitions": {},
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "path": "/dev/sr0",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "removable": "1",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "rev": "2.5+",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "ro": "0",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "rotational": "1",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "sas_address": "",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "sas_device_handle": "",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "scheduler_mode": "mq-deadline",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "sectors": 0,
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "sectorsize": "2048",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "size": 493568.0,
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "support_discard": "2048",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "type": "disk",
Jan 29 09:13:14 compute-0 clever_yonath[89251]:             "vendor": "QEMU"
Jan 29 09:13:14 compute-0 clever_yonath[89251]:         }
Jan 29 09:13:14 compute-0 clever_yonath[89251]:     }
Jan 29 09:13:14 compute-0 clever_yonath[89251]: ]
Jan 29 09:13:14 compute-0 systemd[1]: libpod-ffcf2cfda86579974accc51841d3b5fa0683b8bb29393fc0af5ea64cc6aea09f.scope: Deactivated successfully.
Jan 29 09:13:14 compute-0 podman[89235]: 2026-01-29 09:13:14.497829582 +0000 UTC m=+0.792004598 container died ffcf2cfda86579974accc51841d3b5fa0683b8bb29393fc0af5ea64cc6aea09f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_yonath, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:13:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd96231598898ccdedc1535d4998836e0dd3eeacce9a53dcd9fef63ab43629dd-merged.mount: Deactivated successfully.
Jan 29 09:13:14 compute-0 podman[89235]: 2026-01-29 09:13:14.750728131 +0000 UTC m=+1.044903127 container remove ffcf2cfda86579974accc51841d3b5fa0683b8bb29393fc0af5ea64cc6aea09f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_yonath, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:14 compute-0 systemd[1]: libpod-conmon-ffcf2cfda86579974accc51841d3b5fa0683b8bb29393fc0af5ea64cc6aea09f.scope: Deactivated successfully.
Jan 29 09:13:14 compute-0 sudo[89159]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:13:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 29 09:13:14 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:14 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2045548742; not ready for session (expect reconnect)
Jan 29 09:13:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:13:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:14 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:14 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 29 09:13:14 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 29 09:13:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:14 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:14 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:14 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:15 compute-0 ceph-mon[75183]: osd.1 [v2:192.168.122.100:6806/3500797224,v1:192.168.122.100:6807/3500797224] boot
Jan 29 09:13:15 compute-0 ceph-mon[75183]: osdmap e14: 3 total, 2 up, 3 in
Jan 29 09:13:15 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 09:13:15 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:15 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:15 compute-0 ceph-mon[75183]: pgmap v44: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 29 09:13:15 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:15 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 29 09:13:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 29 09:13:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 29 09:13:15 compute-0 ceph-mgr[75473]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43685k
Jan 29 09:13:15 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43685k
Jan 29 09:13:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 29 09:13:15 compute-0 ceph-mgr[75473]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44733508: error parsing value: Value '44733508' is below minimum 939524096
Jan 29 09:13:15 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44733508: error parsing value: Value '44733508' is below minimum 939524096
Jan 29 09:13:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:13:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:13:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:13:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:13:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:13:15 compute-0 sudo[90055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:13:15 compute-0 sudo[90055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:15 compute-0 sudo[90055]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:15 compute-0 sudo[90080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:13:15 compute-0 sudo[90080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:15 compute-0 ceph-mgr[75473]: [devicehealth INFO root] creating main.db for devicehealth
Jan 29 09:13:15 compute-0 ceph-mgr[75473]: [devicehealth INFO root] Check health
Jan 29 09:13:15 compute-0 ceph-mgr[75473]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 29 09:13:15 compute-0 podman[90115]: 2026-01-29 09:13:15.642826928 +0000 UTC m=+0.113856771 container create 4a161027bfed86e0c1a00a6c1cb43ee67491b0962f38ae1bbf22d85715f707e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_chaum, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 09:13:15 compute-0 podman[90115]: 2026-01-29 09:13:15.552543965 +0000 UTC m=+0.023573838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:13:15 compute-0 sudo[90141]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 29 09:13:15 compute-0 sudo[90141]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 29 09:13:15 compute-0 sudo[90141]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 29 09:13:15 compute-0 sudo[90141]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 29 09:13:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 29 09:13:15 compute-0 systemd[1]: Started libpod-conmon-4a161027bfed86e0c1a00a6c1cb43ee67491b0962f38ae1bbf22d85715f707e0.scope.
Jan 29 09:13:15 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:15 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2045548742; not ready for session (expect reconnect)
Jan 29 09:13:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:15 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:15 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 29 09:13:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 29 09:13:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 29 09:13:16 compute-0 podman[90115]: 2026-01-29 09:13:16.0307394 +0000 UTC m=+0.501769273 container init 4a161027bfed86e0c1a00a6c1cb43ee67491b0962f38ae1bbf22d85715f707e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_chaum, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 09:13:16 compute-0 podman[90115]: 2026-01-29 09:13:16.037784025 +0000 UTC m=+0.508813868 container start 4a161027bfed86e0c1a00a6c1cb43ee67491b0962f38ae1bbf22d85715f707e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_chaum, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:13:16 compute-0 angry_chaum[90146]: 167 167
Jan 29 09:13:16 compute-0 systemd[1]: libpod-4a161027bfed86e0c1a00a6c1cb43ee67491b0962f38ae1bbf22d85715f707e0.scope: Deactivated successfully.
Jan 29 09:13:16 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 29 09:13:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:16 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:16 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:16 compute-0 podman[90115]: 2026-01-29 09:13:16.122491812 +0000 UTC m=+0.593521675 container attach 4a161027bfed86e0c1a00a6c1cb43ee67491b0962f38ae1bbf22d85715f707e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 09:13:16 compute-0 podman[90115]: 2026-01-29 09:13:16.122976874 +0000 UTC m=+0.594006717 container died 4a161027bfed86e0c1a00a6c1cb43ee67491b0962f38ae1bbf22d85715f707e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:13:16 compute-0 ceph-mon[75183]: osdmap e15: 3 total, 2 up, 3 in
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 29 09:13:16 compute-0 ceph-mon[75183]: Adjusting osd_memory_target on compute-0 to 43685k
Jan 29 09:13:16 compute-0 ceph-mon[75183]: Unable to set osd_memory_target on compute-0 to 44733508: error parsing value: Value '44733508' is below minimum 939524096
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 29 09:13:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4325b4d79033149e6ab59e48bfc0add369713211912b064906dcbd1efaf975f5-merged.mount: Deactivated successfully.
Jan 29 09:13:16 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ucpkkb(active, since 80s)
Jan 29 09:13:16 compute-0 podman[90115]: 2026-01-29 09:13:16.507575819 +0000 UTC m=+0.978605662 container remove 4a161027bfed86e0c1a00a6c1cb43ee67491b0962f38ae1bbf22d85715f707e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_chaum, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 09:13:16 compute-0 systemd[1]: libpod-conmon-4a161027bfed86e0c1a00a6c1cb43ee67491b0962f38ae1bbf22d85715f707e0.scope: Deactivated successfully.
Jan 29 09:13:16 compute-0 podman[90172]: 2026-01-29 09:13:16.623512734 +0000 UTC m=+0.021206876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:13:16 compute-0 podman[90172]: 2026-01-29 09:13:16.760383456 +0000 UTC m=+0.158077578 container create ee872bf82b92a00a5ced4dc886cd254323f65bb8c7e491b4cdc55ed20d856983 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 09:13:16 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2045548742; not ready for session (expect reconnect)
Jan 29 09:13:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:16 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:16 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:16 compute-0 systemd[1]: Started libpod-conmon-ee872bf82b92a00a5ced4dc886cd254323f65bb8c7e491b4cdc55ed20d856983.scope.
Jan 29 09:13:16 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293d42803ba62b589c054e9bb8183a76e284f71869549257b53f1856144946e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293d42803ba62b589c054e9bb8183a76e284f71869549257b53f1856144946e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293d42803ba62b589c054e9bb8183a76e284f71869549257b53f1856144946e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293d42803ba62b589c054e9bb8183a76e284f71869549257b53f1856144946e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293d42803ba62b589c054e9bb8183a76e284f71869549257b53f1856144946e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:17 compute-0 podman[90172]: 2026-01-29 09:13:17.000946571 +0000 UTC m=+0.398640693 container init ee872bf82b92a00a5ced4dc886cd254323f65bb8c7e491b4cdc55ed20d856983 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_johnson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 09:13:17 compute-0 podman[90172]: 2026-01-29 09:13:17.006754123 +0000 UTC m=+0.404448245 container start ee872bf82b92a00a5ced4dc886cd254323f65bb8c7e491b4cdc55ed20d856983 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_johnson, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:17 compute-0 podman[90172]: 2026-01-29 09:13:17.038880944 +0000 UTC m=+0.436575096 container attach ee872bf82b92a00a5ced4dc886cd254323f65bb8c7e491b4cdc55ed20d856983 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_johnson, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 09:13:17 compute-0 ceph-osd[88193]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 7.017 iops: 1796.248 elapsed_sec: 1.670
Jan 29 09:13:17 compute-0 ceph-osd[88193]: log_channel(cluster) log [WRN] : OSD bench result of 1796.248288 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 29 09:13:17 compute-0 ceph-osd[88193]: osd.2 0 waiting for initial osdmap
Jan 29 09:13:17 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2[88189]: 2026-01-29T09:13:17.136+0000 7f6e7e31d640 -1 osd.2 0 waiting for initial osdmap
Jan 29 09:13:17 compute-0 ceph-mon[75183]: pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 29 09:13:17 compute-0 ceph-mon[75183]: osdmap e16: 3 total, 2 up, 3 in
Jan 29 09:13:17 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:17 compute-0 ceph-mon[75183]: mgrmap e9: compute-0.ucpkkb(active, since 80s)
Jan 29 09:13:17 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:17 compute-0 ceph-osd[88193]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 29 09:13:17 compute-0 ceph-osd[88193]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 29 09:13:17 compute-0 ceph-osd[88193]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 29 09:13:17 compute-0 ceph-osd[88193]: osd.2 16 check_osdmap_features require_osd_release unknown -> tentacle
Jan 29 09:13:17 compute-0 ceph-osd[88193]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 29 09:13:17 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-osd-2[88189]: 2026-01-29T09:13:17.187+0000 7f6e79122640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 29 09:13:17 compute-0 ceph-osd[88193]: osd.2 16 set_numa_affinity not setting numa affinity
Jan 29 09:13:17 compute-0 ceph-osd[88193]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 29 09:13:17 compute-0 peaceful_johnson[90188]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:13:17 compute-0 peaceful_johnson[90188]: --> All data devices are unavailable
Jan 29 09:13:17 compute-0 systemd[1]: libpod-ee872bf82b92a00a5ced4dc886cd254323f65bb8c7e491b4cdc55ed20d856983.scope: Deactivated successfully.
Jan 29 09:13:17 compute-0 podman[90172]: 2026-01-29 09:13:17.439384825 +0000 UTC m=+0.837078957 container died ee872bf82b92a00a5ced4dc886cd254323f65bb8c7e491b4cdc55ed20d856983 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 09:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-293d42803ba62b589c054e9bb8183a76e284f71869549257b53f1856144946e8-merged.mount: Deactivated successfully.
Jan 29 09:13:17 compute-0 podman[90172]: 2026-01-29 09:13:17.6669261 +0000 UTC m=+1.064620222 container remove ee872bf82b92a00a5ced4dc886cd254323f65bb8c7e491b4cdc55ed20d856983 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_johnson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:17 compute-0 systemd[1]: libpod-conmon-ee872bf82b92a00a5ced4dc886cd254323f65bb8c7e491b4cdc55ed20d856983.scope: Deactivated successfully.
Jan 29 09:13:17 compute-0 sudo[90080]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:17 compute-0 sudo[90219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:13:17 compute-0 sudo[90219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:17 compute-0 sudo[90219]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:17 compute-0 sudo[90244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:13:17 compute-0 sudo[90244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:17 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2045548742; not ready for session (expect reconnect)
Jan 29 09:13:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:17 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:17 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 09:13:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 29 09:13:18 compute-0 ceph-osd[88193]: osd.2 16 tick checking mon for new map
Jan 29 09:13:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:13:18 compute-0 podman[90281]: 2026-01-29 09:13:18.134719032 +0000 UTC m=+0.052464224 container create f501d5bbc4ccfdb7e73703f3db21d925125012ccede210df40c89b9ca839d1d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 09:13:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 29 09:13:18 compute-0 ceph-mon[75183]: OSD bench result of 1796.248288 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 29 09:13:18 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:18 compute-0 ceph-mon[75183]: pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 29 09:13:18 compute-0 podman[90281]: 2026-01-29 09:13:18.106446382 +0000 UTC m=+0.024191594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:13:18 compute-0 systemd[1]: Started libpod-conmon-f501d5bbc4ccfdb7e73703f3db21d925125012ccede210df40c89b9ca839d1d2.scope.
Jan 29 09:13:18 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Jan 29 09:13:18 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2045548742,v1:192.168.122.100:6811/2045548742] boot
Jan 29 09:13:18 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Jan 29 09:13:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 09:13:18 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:18 compute-0 podman[90281]: 2026-01-29 09:13:18.279749838 +0000 UTC m=+0.197495030 container init f501d5bbc4ccfdb7e73703f3db21d925125012ccede210df40c89b9ca839d1d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 09:13:18 compute-0 podman[90281]: 2026-01-29 09:13:18.286970897 +0000 UTC m=+0.204716089 container start f501d5bbc4ccfdb7e73703f3db21d925125012ccede210df40c89b9ca839d1d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 09:13:18 compute-0 determined_bardeen[90298]: 167 167
Jan 29 09:13:18 compute-0 systemd[1]: libpod-f501d5bbc4ccfdb7e73703f3db21d925125012ccede210df40c89b9ca839d1d2.scope: Deactivated successfully.
Jan 29 09:13:18 compute-0 ceph-osd[88193]: osd.2 17 state: booting -> active
Jan 29 09:13:18 compute-0 podman[90281]: 2026-01-29 09:13:18.344012359 +0000 UTC m=+0.261757571 container attach f501d5bbc4ccfdb7e73703f3db21d925125012ccede210df40c89b9ca839d1d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:18 compute-0 podman[90281]: 2026-01-29 09:13:18.344546763 +0000 UTC m=+0.262291955 container died f501d5bbc4ccfdb7e73703f3db21d925125012ccede210df40c89b9ca839d1d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad695ab0f58b0ed75ef3bc08f9639176b23b9fe4129dd119e4f9c41b42217f46-merged.mount: Deactivated successfully.
Jan 29 09:13:18 compute-0 podman[90281]: 2026-01-29 09:13:18.469337729 +0000 UTC m=+0.387082921 container remove f501d5bbc4ccfdb7e73703f3db21d925125012ccede210df40c89b9ca839d1d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:13:18 compute-0 systemd[1]: libpod-conmon-f501d5bbc4ccfdb7e73703f3db21d925125012ccede210df40c89b9ca839d1d2.scope: Deactivated successfully.
Jan 29 09:13:18 compute-0 podman[90324]: 2026-01-29 09:13:18.580363945 +0000 UTC m=+0.023420834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:13:18 compute-0 podman[90324]: 2026-01-29 09:13:18.744047909 +0000 UTC m=+0.187104788 container create fb8efd7901903a5f7728940adb2054328d63adf658b5ae11d54913f3fede5561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_napier, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 09:13:18 compute-0 systemd[1]: Started libpod-conmon-fb8efd7901903a5f7728940adb2054328d63adf658b5ae11d54913f3fede5561.scope.
Jan 29 09:13:18 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae8f4d0e92760dd8d7a6bccf020de9d9988dbc8480e2a9493466e567a75939b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae8f4d0e92760dd8d7a6bccf020de9d9988dbc8480e2a9493466e567a75939b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae8f4d0e92760dd8d7a6bccf020de9d9988dbc8480e2a9493466e567a75939b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae8f4d0e92760dd8d7a6bccf020de9d9988dbc8480e2a9493466e567a75939b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:19 compute-0 podman[90324]: 2026-01-29 09:13:19.069305021 +0000 UTC m=+0.512361930 container init fb8efd7901903a5f7728940adb2054328d63adf658b5ae11d54913f3fede5561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 09:13:19 compute-0 podman[90324]: 2026-01-29 09:13:19.075912464 +0000 UTC m=+0.518969333 container start fb8efd7901903a5f7728940adb2054328d63adf658b5ae11d54913f3fede5561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_napier, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:13:19 compute-0 podman[90324]: 2026-01-29 09:13:19.375314649 +0000 UTC m=+0.818371518 container attach fb8efd7901903a5f7728940adb2054328d63adf658b5ae11d54913f3fede5561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_napier, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:13:19 compute-0 focused_napier[90340]: {
Jan 29 09:13:19 compute-0 focused_napier[90340]:     "0": [
Jan 29 09:13:19 compute-0 focused_napier[90340]:         {
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "devices": [
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "/dev/loop3"
Jan 29 09:13:19 compute-0 focused_napier[90340]:             ],
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_name": "ceph_lv0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_size": "21470642176",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "name": "ceph_lv0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "tags": {
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.cluster_name": "ceph",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.crush_device_class": "",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.encrypted": "0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.objectstore": "bluestore",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.osd_id": "0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.type": "block",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.vdo": "0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.with_tpm": "0"
Jan 29 09:13:19 compute-0 focused_napier[90340]:             },
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "type": "block",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "vg_name": "ceph_vg0"
Jan 29 09:13:19 compute-0 focused_napier[90340]:         }
Jan 29 09:13:19 compute-0 focused_napier[90340]:     ],
Jan 29 09:13:19 compute-0 focused_napier[90340]:     "1": [
Jan 29 09:13:19 compute-0 focused_napier[90340]:         {
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "devices": [
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "/dev/loop4"
Jan 29 09:13:19 compute-0 focused_napier[90340]:             ],
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_name": "ceph_lv1",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_size": "21470642176",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "name": "ceph_lv1",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "tags": {
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.cluster_name": "ceph",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.crush_device_class": "",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.encrypted": "0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.objectstore": "bluestore",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.osd_id": "1",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.type": "block",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.vdo": "0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.with_tpm": "0"
Jan 29 09:13:19 compute-0 focused_napier[90340]:             },
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "type": "block",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "vg_name": "ceph_vg1"
Jan 29 09:13:19 compute-0 focused_napier[90340]:         }
Jan 29 09:13:19 compute-0 focused_napier[90340]:     ],
Jan 29 09:13:19 compute-0 focused_napier[90340]:     "2": [
Jan 29 09:13:19 compute-0 focused_napier[90340]:         {
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "devices": [
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "/dev/loop5"
Jan 29 09:13:19 compute-0 focused_napier[90340]:             ],
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_name": "ceph_lv2",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_size": "21470642176",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "name": "ceph_lv2",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "tags": {
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.cluster_name": "ceph",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.crush_device_class": "",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.encrypted": "0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.objectstore": "bluestore",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.osd_id": "2",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.type": "block",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.vdo": "0",
Jan 29 09:13:19 compute-0 focused_napier[90340]:                 "ceph.with_tpm": "0"
Jan 29 09:13:19 compute-0 focused_napier[90340]:             },
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "type": "block",
Jan 29 09:13:19 compute-0 focused_napier[90340]:             "vg_name": "ceph_vg2"
Jan 29 09:13:19 compute-0 focused_napier[90340]:         }
Jan 29 09:13:19 compute-0 focused_napier[90340]:     ]
Jan 29 09:13:19 compute-0 focused_napier[90340]: }
Jan 29 09:13:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 29 09:13:19 compute-0 systemd[1]: libpod-fb8efd7901903a5f7728940adb2054328d63adf658b5ae11d54913f3fede5561.scope: Deactivated successfully.
Jan 29 09:13:19 compute-0 podman[90324]: 2026-01-29 09:13:19.44642645 +0000 UTC m=+0.889483329 container died fb8efd7901903a5f7728940adb2054328d63adf658b5ae11d54913f3fede5561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_napier, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:13:19 compute-0 ceph-mon[75183]: osd.2 [v2:192.168.122.100:6810/2045548742,v1:192.168.122.100:6811/2045548742] boot
Jan 29 09:13:19 compute-0 ceph-mon[75183]: osdmap e17: 3 total, 3 up, 3 in
Jan 29 09:13:19 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 09:13:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Jan 29 09:13:19 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Jan 29 09:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-aae8f4d0e92760dd8d7a6bccf020de9d9988dbc8480e2a9493466e567a75939b-merged.mount: Deactivated successfully.
Jan 29 09:13:19 compute-0 podman[90324]: 2026-01-29 09:13:19.898936993 +0000 UTC m=+1.341993862 container remove fb8efd7901903a5f7728940adb2054328d63adf658b5ae11d54913f3fede5561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_napier, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:13:19 compute-0 systemd[1]: libpod-conmon-fb8efd7901903a5f7728940adb2054328d63adf658b5ae11d54913f3fede5561.scope: Deactivated successfully.
Jan 29 09:13:19 compute-0 sudo[90244]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 29 09:13:20 compute-0 sudo[90362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:13:20 compute-0 sudo[90362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:20 compute-0 sudo[90362]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:20 compute-0 sudo[90387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:13:20 compute-0 sudo[90387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:20 compute-0 podman[90424]: 2026-01-29 09:13:20.408033696 +0000 UTC m=+0.102995706 container create 44f8f7ee255ec0b6fbad98e46ef4ca338d7392016ff55e640cd5374bdf44bf35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bartik, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:20 compute-0 podman[90424]: 2026-01-29 09:13:20.332360176 +0000 UTC m=+0.027322216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:13:20 compute-0 systemd[1]: Started libpod-conmon-44f8f7ee255ec0b6fbad98e46ef4ca338d7392016ff55e640cd5374bdf44bf35.scope.
Jan 29 09:13:20 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:20 compute-0 podman[90424]: 2026-01-29 09:13:20.580652784 +0000 UTC m=+0.275614814 container init 44f8f7ee255ec0b6fbad98e46ef4ca338d7392016ff55e640cd5374bdf44bf35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bartik, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 09:13:20 compute-0 podman[90424]: 2026-01-29 09:13:20.586850476 +0000 UTC m=+0.281812496 container start 44f8f7ee255ec0b6fbad98e46ef4ca338d7392016ff55e640cd5374bdf44bf35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bartik, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 09:13:20 compute-0 modest_bartik[90440]: 167 167
Jan 29 09:13:20 compute-0 systemd[1]: libpod-44f8f7ee255ec0b6fbad98e46ef4ca338d7392016ff55e640cd5374bdf44bf35.scope: Deactivated successfully.
Jan 29 09:13:20 compute-0 ceph-mon[75183]: osdmap e18: 3 total, 3 up, 3 in
Jan 29 09:13:20 compute-0 ceph-mon[75183]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 29 09:13:20 compute-0 podman[90424]: 2026-01-29 09:13:20.68747387 +0000 UTC m=+0.382435900 container attach 44f8f7ee255ec0b6fbad98e46ef4ca338d7392016ff55e640cd5374bdf44bf35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:20 compute-0 podman[90424]: 2026-01-29 09:13:20.687897471 +0000 UTC m=+0.382859481 container died 44f8f7ee255ec0b6fbad98e46ef4ca338d7392016ff55e640cd5374bdf44bf35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 09:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5efafbbe892350d145dacf146eb00fa582d6201acf8525cf40710a15a95b1da3-merged.mount: Deactivated successfully.
Jan 29 09:13:21 compute-0 podman[90424]: 2026-01-29 09:13:21.104353979 +0000 UTC m=+0.799315989 container remove 44f8f7ee255ec0b6fbad98e46ef4ca338d7392016ff55e640cd5374bdf44bf35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 09:13:21 compute-0 systemd[1]: libpod-conmon-44f8f7ee255ec0b6fbad98e46ef4ca338d7392016ff55e640cd5374bdf44bf35.scope: Deactivated successfully.
Jan 29 09:13:21 compute-0 podman[90464]: 2026-01-29 09:13:21.212883799 +0000 UTC m=+0.021885364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:13:21 compute-0 podman[90464]: 2026-01-29 09:13:21.356799866 +0000 UTC m=+0.165801411 container create 885633effdc95f28e49a748aa0fe39e98c94b1089d4b3fa84bf322f785d2e24d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 29 09:13:21 compute-0 systemd[1]: Started libpod-conmon-885633effdc95f28e49a748aa0fe39e98c94b1089d4b3fa84bf322f785d2e24d.scope.
Jan 29 09:13:21 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686930cba29729ab6211a33f0c23df909ae04a6d261efe5a0f266aeda8925fd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686930cba29729ab6211a33f0c23df909ae04a6d261efe5a0f266aeda8925fd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686930cba29729ab6211a33f0c23df909ae04a6d261efe5a0f266aeda8925fd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686930cba29729ab6211a33f0c23df909ae04a6d261efe5a0f266aeda8925fd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:21 compute-0 podman[90464]: 2026-01-29 09:13:21.617700834 +0000 UTC m=+0.426702409 container init 885633effdc95f28e49a748aa0fe39e98c94b1089d4b3fa84bf322f785d2e24d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:13:21 compute-0 podman[90464]: 2026-01-29 09:13:21.624891402 +0000 UTC m=+0.433892947 container start 885633effdc95f28e49a748aa0fe39e98c94b1089d4b3fa84bf322f785d2e24d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 09:13:21 compute-0 podman[90464]: 2026-01-29 09:13:21.73412272 +0000 UTC m=+0.543124285 container attach 885633effdc95f28e49a748aa0fe39e98c94b1089d4b3fa84bf322f785d2e24d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_almeida, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:13:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 29 09:13:22 compute-0 lvm[90557]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:13:22 compute-0 lvm[90560]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:13:22 compute-0 lvm[90560]: VG ceph_vg1 finished
Jan 29 09:13:22 compute-0 lvm[90557]: VG ceph_vg0 finished
Jan 29 09:13:22 compute-0 lvm[90562]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:13:22 compute-0 lvm[90562]: VG ceph_vg2 finished
Jan 29 09:13:22 compute-0 brave_almeida[90481]: {}
Jan 29 09:13:22 compute-0 systemd[1]: libpod-885633effdc95f28e49a748aa0fe39e98c94b1089d4b3fa84bf322f785d2e24d.scope: Deactivated successfully.
Jan 29 09:13:22 compute-0 systemd[1]: libpod-885633effdc95f28e49a748aa0fe39e98c94b1089d4b3fa84bf322f785d2e24d.scope: Consumed 1.295s CPU time.
Jan 29 09:13:22 compute-0 podman[90464]: 2026-01-29 09:13:22.452379038 +0000 UTC m=+1.261380643 container died 885633effdc95f28e49a748aa0fe39e98c94b1089d4b3fa84bf322f785d2e24d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 09:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-686930cba29729ab6211a33f0c23df909ae04a6d261efe5a0f266aeda8925fd8-merged.mount: Deactivated successfully.
Jan 29 09:13:22 compute-0 podman[90464]: 2026-01-29 09:13:22.594503677 +0000 UTC m=+1.403505242 container remove 885633effdc95f28e49a748aa0fe39e98c94b1089d4b3fa84bf322f785d2e24d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_almeida, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:13:22 compute-0 systemd[1]: libpod-conmon-885633effdc95f28e49a748aa0fe39e98c94b1089d4b3fa84bf322f785d2e24d.scope: Deactivated successfully.
Jan 29 09:13:22 compute-0 sudo[90387]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:13:22 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:13:22 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:22 compute-0 sudo[90578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:13:22 compute-0 sudo[90578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:22 compute-0 sudo[90578]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:23 compute-0 ceph-mon[75183]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 29 09:13:23 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:23 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:13:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 29 09:13:24 compute-0 ceph-mon[75183]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 29 09:13:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:13:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:13:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:13:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:13:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:13:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:13:27 compute-0 ceph-mon[75183]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:13:28 compute-0 ceph-mon[75183]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:31 compute-0 ceph-mon[75183]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:31 compute-0 sudo[90626]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ridzzqbnvjxjfukunnkgvjxhkhjxkwah ; /usr/bin/python3'
Jan 29 09:13:31 compute-0 sudo[90626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:31 compute-0 python3[90628]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:31 compute-0 podman[90630]: 2026-01-29 09:13:31.924057145 +0000 UTC m=+0.056397216 container create 2dfabc8fa4dac24aedf3d2b618c08305c3296a1c68699b21129437baf236be2c (image=quay.io/ceph/ceph:v20, name=affectionate_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:13:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:31 compute-0 systemd[1]: Started libpod-conmon-2dfabc8fa4dac24aedf3d2b618c08305c3296a1c68699b21129437baf236be2c.scope.
Jan 29 09:13:31 compute-0 podman[90630]: 2026-01-29 09:13:31.901499855 +0000 UTC m=+0.033839956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:32 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e27ea18f704e55f502fab0622221590df1ab840b2c85a4c3ce346321c6108b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e27ea18f704e55f502fab0622221590df1ab840b2c85a4c3ce346321c6108b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e27ea18f704e55f502fab0622221590df1ab840b2c85a4c3ce346321c6108b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:32 compute-0 podman[90630]: 2026-01-29 09:13:32.090617034 +0000 UTC m=+0.222957145 container init 2dfabc8fa4dac24aedf3d2b618c08305c3296a1c68699b21129437baf236be2c (image=quay.io/ceph/ceph:v20, name=affectionate_jepsen, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 09:13:32 compute-0 podman[90630]: 2026-01-29 09:13:32.09806773 +0000 UTC m=+0.230407811 container start 2dfabc8fa4dac24aedf3d2b618c08305c3296a1c68699b21129437baf236be2c (image=quay.io/ceph/ceph:v20, name=affectionate_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 09:13:32 compute-0 podman[90630]: 2026-01-29 09:13:32.127001047 +0000 UTC m=+0.259341158 container attach 2dfabc8fa4dac24aedf3d2b618c08305c3296a1c68699b21129437baf236be2c (image=quay.io/ceph/ceph:v20, name=affectionate_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 09:13:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 29 09:13:32 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1558245006' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 09:13:32 compute-0 affectionate_jepsen[90646]: 
Jan 29 09:13:32 compute-0 affectionate_jepsen[90646]: {"fsid":"3fdce3ca-565d-5459-88e8-1ffe58b48437","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":114,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":18,"num_osds":3,"num_up_osds":3,"osd_up_since":1769677998,"num_in_osds":3,"osd_in_since":1769677957,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502849536,"bytes_avail":63909076992,"bytes_total":64411926528},"fsmap":{"epoch":1,"btime":"2026-01-29T09:11:36:099108+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-29T09:12:57.948210+0000","services":{}},"progress_events":{}}
Jan 29 09:13:32 compute-0 systemd[1]: libpod-2dfabc8fa4dac24aedf3d2b618c08305c3296a1c68699b21129437baf236be2c.scope: Deactivated successfully.
Jan 29 09:13:32 compute-0 podman[90671]: 2026-01-29 09:13:32.814889849 +0000 UTC m=+0.039792322 container died 2dfabc8fa4dac24aedf3d2b618c08305c3296a1c68699b21129437baf236be2c (image=quay.io/ceph/ceph:v20, name=affectionate_jepsen, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7e27ea18f704e55f502fab0622221590df1ab840b2c85a4c3ce346321c6108b-merged.mount: Deactivated successfully.
Jan 29 09:13:32 compute-0 podman[90671]: 2026-01-29 09:13:32.916861618 +0000 UTC m=+0.141764071 container remove 2dfabc8fa4dac24aedf3d2b618c08305c3296a1c68699b21129437baf236be2c (image=quay.io/ceph/ceph:v20, name=affectionate_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:13:32 compute-0 systemd[1]: libpod-conmon-2dfabc8fa4dac24aedf3d2b618c08305c3296a1c68699b21129437baf236be2c.scope: Deactivated successfully.
Jan 29 09:13:32 compute-0 sudo[90626]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:33 compute-0 ceph-mon[75183]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:33 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1558245006' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 09:13:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:13:33 compute-0 sudo[90708]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjcebqbqohxufajanfswjxfqakqoguuo ; /usr/bin/python3'
Jan 29 09:13:33 compute-0 sudo[90708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:33 compute-0 python3[90710]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:33 compute-0 podman[90711]: 2026-01-29 09:13:33.46412165 +0000 UTC m=+0.024685847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:33 compute-0 podman[90711]: 2026-01-29 09:13:33.601583308 +0000 UTC m=+0.162147475 container create f1bae5cf1820f6cce1b64090cae1623ed53c03711d392feee9325be6dbfa82e0 (image=quay.io/ceph/ceph:v20, name=sleepy_ardinghelli, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:13:33 compute-0 systemd[1]: Started libpod-conmon-f1bae5cf1820f6cce1b64090cae1623ed53c03711d392feee9325be6dbfa82e0.scope.
Jan 29 09:13:33 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13202f181725b8f9bc0011921e6832af1fa1619c7bbe1b91d229db3e47adc9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13202f181725b8f9bc0011921e6832af1fa1619c7bbe1b91d229db3e47adc9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:33 compute-0 podman[90711]: 2026-01-29 09:13:33.769663517 +0000 UTC m=+0.330227684 container init f1bae5cf1820f6cce1b64090cae1623ed53c03711d392feee9325be6dbfa82e0 (image=quay.io/ceph/ceph:v20, name=sleepy_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 09:13:33 compute-0 podman[90711]: 2026-01-29 09:13:33.779010341 +0000 UTC m=+0.339574508 container start f1bae5cf1820f6cce1b64090cae1623ed53c03711d392feee9325be6dbfa82e0 (image=quay.io/ceph/ceph:v20, name=sleepy_ardinghelli, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 09:13:33 compute-0 podman[90711]: 2026-01-29 09:13:33.845968454 +0000 UTC m=+0.406532651 container attach f1bae5cf1820f6cce1b64090cae1623ed53c03711d392feee9325be6dbfa82e0 (image=quay.io/ceph/ceph:v20, name=sleepy_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 09:13:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 29 09:13:34 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/647038274' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 09:13:34 compute-0 ceph-mon[75183]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:35 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 29 09:13:35 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/647038274' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 09:13:35 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 29 09:13:35 compute-0 sleepy_ardinghelli[90726]: pool 'vms' created
Jan 29 09:13:35 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 29 09:13:35 compute-0 systemd[1]: libpod-f1bae5cf1820f6cce1b64090cae1623ed53c03711d392feee9325be6dbfa82e0.scope: Deactivated successfully.
Jan 29 09:13:35 compute-0 podman[90711]: 2026-01-29 09:13:35.276492831 +0000 UTC m=+1.837057008 container died f1bae5cf1820f6cce1b64090cae1623ed53c03711d392feee9325be6dbfa82e0 (image=quay.io/ceph/ceph:v20, name=sleepy_ardinghelli, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 09:13:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c13202f181725b8f9bc0011921e6832af1fa1619c7bbe1b91d229db3e47adc9-merged.mount: Deactivated successfully.
Jan 29 09:13:35 compute-0 podman[90711]: 2026-01-29 09:13:35.414576824 +0000 UTC m=+1.975141001 container remove f1bae5cf1820f6cce1b64090cae1623ed53c03711d392feee9325be6dbfa82e0 (image=quay.io/ceph/ceph:v20, name=sleepy_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:35 compute-0 systemd[1]: libpod-conmon-f1bae5cf1820f6cce1b64090cae1623ed53c03711d392feee9325be6dbfa82e0.scope: Deactivated successfully.
Jan 29 09:13:35 compute-0 sudo[90708]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:35 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/647038274' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 09:13:35 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/647038274' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 09:13:35 compute-0 ceph-mon[75183]: osdmap e19: 3 total, 3 up, 3 in
Jan 29 09:13:35 compute-0 sudo[90788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmvaqdwfxodnheobjxufzoidfsftiwwy ; /usr/bin/python3'
Jan 29 09:13:35 compute-0 sudo[90788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:35 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:35 compute-0 python3[90790]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:35 compute-0 podman[90791]: 2026-01-29 09:13:35.760499227 +0000 UTC m=+0.023181147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:35 compute-0 podman[90791]: 2026-01-29 09:13:35.887516342 +0000 UTC m=+0.150198242 container create cbddae4920f0aa539c3c840a8cf7cb2a790007e47ed2b29b20000a783b4f6bd7 (image=quay.io/ceph/ceph:v20, name=priceless_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:13:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v60: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:36 compute-0 systemd[1]: Started libpod-conmon-cbddae4920f0aa539c3c840a8cf7cb2a790007e47ed2b29b20000a783b4f6bd7.scope.
Jan 29 09:13:36 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6fc0201d3943f309c90678e97e6e044ac85523a15393185f53f89834ae14f4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6fc0201d3943f309c90678e97e6e044ac85523a15393185f53f89834ae14f4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:36 compute-0 podman[90791]: 2026-01-29 09:13:36.168362061 +0000 UTC m=+0.431043981 container init cbddae4920f0aa539c3c840a8cf7cb2a790007e47ed2b29b20000a783b4f6bd7 (image=quay.io/ceph/ceph:v20, name=priceless_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 09:13:36 compute-0 podman[90791]: 2026-01-29 09:13:36.174341668 +0000 UTC m=+0.437023568 container start cbddae4920f0aa539c3c840a8cf7cb2a790007e47ed2b29b20000a783b4f6bd7 (image=quay.io/ceph/ceph:v20, name=priceless_dirac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 29 09:13:36 compute-0 podman[90791]: 2026-01-29 09:13:36.234697477 +0000 UTC m=+0.497379417 container attach cbddae4920f0aa539c3c840a8cf7cb2a790007e47ed2b29b20000a783b4f6bd7 (image=quay.io/ceph/ceph:v20, name=priceless_dirac, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:36 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 29 09:13:36 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 29 09:13:36 compute-0 ceph-mon[75183]: pgmap v60: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:36 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 29 09:13:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:36 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 29 09:13:36 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/952011564' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 09:13:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 29 09:13:37 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/952011564' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 09:13:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 29 09:13:37 compute-0 priceless_dirac[90807]: pool 'volumes' created
Jan 29 09:13:37 compute-0 systemd[1]: libpod-cbddae4920f0aa539c3c840a8cf7cb2a790007e47ed2b29b20000a783b4f6bd7.scope: Deactivated successfully.
Jan 29 09:13:37 compute-0 podman[90834]: 2026-01-29 09:13:37.781231361 +0000 UTC m=+0.025596781 container died cbddae4920f0aa539c3c840a8cf7cb2a790007e47ed2b29b20000a783b4f6bd7 (image=quay.io/ceph/ceph:v20, name=priceless_dirac, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 09:13:37 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 29 09:13:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v63: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:38 compute-0 ceph-mon[75183]: osdmap e20: 3 total, 3 up, 3 in
Jan 29 09:13:38 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/952011564' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 09:13:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:13:38 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf6fc0201d3943f309c90678e97e6e044ac85523a15393185f53f89834ae14f4-merged.mount: Deactivated successfully.
Jan 29 09:13:38 compute-0 podman[90834]: 2026-01-29 09:13:38.943397865 +0000 UTC m=+1.187763255 container remove cbddae4920f0aa539c3c840a8cf7cb2a790007e47ed2b29b20000a783b4f6bd7 (image=quay.io/ceph/ceph:v20, name=priceless_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 09:13:38 compute-0 systemd[1]: libpod-conmon-cbddae4920f0aa539c3c840a8cf7cb2a790007e47ed2b29b20000a783b4f6bd7.scope: Deactivated successfully.
Jan 29 09:13:38 compute-0 sudo[90788]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 29 09:13:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 29 09:13:39 compute-0 sudo[90873]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pibtletpxamlqcsyceirikxnoggvanhk ; /usr/bin/python3'
Jan 29 09:13:39 compute-0 sudo[90873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:39 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 29 09:13:39 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:39 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/952011564' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 09:13:39 compute-0 ceph-mon[75183]: osdmap e21: 3 total, 3 up, 3 in
Jan 29 09:13:39 compute-0 ceph-mon[75183]: pgmap v63: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:39 compute-0 python3[90875]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:39 compute-0 podman[90876]: 2026-01-29 09:13:39.355799857 +0000 UTC m=+0.072317693 container create 6a905745295d52cfc40a72126fb95750320702e68561d5aff0041b9fb37fe583 (image=quay.io/ceph/ceph:v20, name=awesome_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 09:13:39 compute-0 podman[90876]: 2026-01-29 09:13:39.317872255 +0000 UTC m=+0.034390121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:39 compute-0 systemd[1]: Started libpod-conmon-6a905745295d52cfc40a72126fb95750320702e68561d5aff0041b9fb37fe583.scope.
Jan 29 09:13:39 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8785535c005e47dbdb893c24d8bf5e3cd1ff2618b9a6ec04e659ce2d15990dbb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8785535c005e47dbdb893c24d8bf5e3cd1ff2618b9a6ec04e659ce2d15990dbb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:39 compute-0 podman[90876]: 2026-01-29 09:13:39.483270123 +0000 UTC m=+0.199787989 container init 6a905745295d52cfc40a72126fb95750320702e68561d5aff0041b9fb37fe583 (image=quay.io/ceph/ceph:v20, name=awesome_golick, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 09:13:39 compute-0 podman[90876]: 2026-01-29 09:13:39.489075685 +0000 UTC m=+0.205593521 container start 6a905745295d52cfc40a72126fb95750320702e68561d5aff0041b9fb37fe583 (image=quay.io/ceph/ceph:v20, name=awesome_golick, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:39 compute-0 podman[90876]: 2026-01-29 09:13:39.521022421 +0000 UTC m=+0.237540287 container attach 6a905745295d52cfc40a72126fb95750320702e68561d5aff0041b9fb37fe583 (image=quay.io/ceph/ceph:v20, name=awesome_golick, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:13:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v65: 3 pgs: 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 29 09:13:40 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1606709178' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 09:13:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 29 09:13:40 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1606709178' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 09:13:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 29 09:13:40 compute-0 awesome_golick[90891]: pool 'backups' created
Jan 29 09:13:40 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 29 09:13:40 compute-0 systemd[1]: libpod-6a905745295d52cfc40a72126fb95750320702e68561d5aff0041b9fb37fe583.scope: Deactivated successfully.
Jan 29 09:13:40 compute-0 podman[90876]: 2026-01-29 09:13:40.206696986 +0000 UTC m=+0.923214852 container died 6a905745295d52cfc40a72126fb95750320702e68561d5aff0041b9fb37fe583 (image=quay.io/ceph/ceph:v20, name=awesome_golick, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:13:40 compute-0 ceph-mon[75183]: osdmap e22: 3 total, 3 up, 3 in
Jan 29 09:13:40 compute-0 ceph-mon[75183]: pgmap v65: 3 pgs: 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:40 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1606709178' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 09:13:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8785535c005e47dbdb893c24d8bf5e3cd1ff2618b9a6ec04e659ce2d15990dbb-merged.mount: Deactivated successfully.
Jan 29 09:13:40 compute-0 podman[90876]: 2026-01-29 09:13:40.584648208 +0000 UTC m=+1.301166054 container remove 6a905745295d52cfc40a72126fb95750320702e68561d5aff0041b9fb37fe583 (image=quay.io/ceph/ceph:v20, name=awesome_golick, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 09:13:40 compute-0 sudo[90873]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:40 compute-0 systemd[1]: libpod-conmon-6a905745295d52cfc40a72126fb95750320702e68561d5aff0041b9fb37fe583.scope: Deactivated successfully.
Jan 29 09:13:40 compute-0 sudo[90953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzshfioroslmwohlnmmxnqfttijskzsa ; /usr/bin/python3'
Jan 29 09:13:40 compute-0 sudo[90953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:40 compute-0 python3[90955]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:40 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:40 compute-0 podman[90956]: 2026-01-29 09:13:40.966426839 +0000 UTC m=+0.059802716 container create 4f6f8442eb8f46b22501508a7a6681ed42b7c26b5d24500c33f5ef5a2949dab1 (image=quay.io/ceph/ceph:v20, name=objective_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 09:13:41 compute-0 systemd[1]: Started libpod-conmon-4f6f8442eb8f46b22501508a7a6681ed42b7c26b5d24500c33f5ef5a2949dab1.scope.
Jan 29 09:13:41 compute-0 podman[90956]: 2026-01-29 09:13:40.933740524 +0000 UTC m=+0.027116421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:41 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35da3b2c0dd41a7436238eb57543097379e14b63c6bfbf123e303fa8b89904b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35da3b2c0dd41a7436238eb57543097379e14b63c6bfbf123e303fa8b89904b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:41 compute-0 podman[90956]: 2026-01-29 09:13:41.073858451 +0000 UTC m=+0.167234348 container init 4f6f8442eb8f46b22501508a7a6681ed42b7c26b5d24500c33f5ef5a2949dab1 (image=quay.io/ceph/ceph:v20, name=objective_yalow, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:13:41 compute-0 podman[90956]: 2026-01-29 09:13:41.079537389 +0000 UTC m=+0.172913266 container start 4f6f8442eb8f46b22501508a7a6681ed42b7c26b5d24500c33f5ef5a2949dab1 (image=quay.io/ceph/ceph:v20, name=objective_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:13:41 compute-0 podman[90956]: 2026-01-29 09:13:41.09980197 +0000 UTC m=+0.193177877 container attach 4f6f8442eb8f46b22501508a7a6681ed42b7c26b5d24500c33f5ef5a2949dab1 (image=quay.io/ceph/ceph:v20, name=objective_yalow, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:13:41 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 29 09:13:41 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1606709178' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 09:13:41 compute-0 ceph-mon[75183]: osdmap e23: 3 total, 3 up, 3 in
Jan 29 09:13:41 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 29 09:13:41 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 29 09:13:41 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 24 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:41 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 29 09:13:41 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2286654678' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 09:13:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v68: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 29 09:13:42 compute-0 ceph-mon[75183]: osdmap e24: 3 total, 3 up, 3 in
Jan 29 09:13:42 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2286654678' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 09:13:42 compute-0 ceph-mon[75183]: pgmap v68: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:42 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2286654678' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 09:13:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 29 09:13:42 compute-0 objective_yalow[90972]: pool 'images' created
Jan 29 09:13:42 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 29 09:13:42 compute-0 systemd[1]: libpod-4f6f8442eb8f46b22501508a7a6681ed42b7c26b5d24500c33f5ef5a2949dab1.scope: Deactivated successfully.
Jan 29 09:13:42 compute-0 podman[90956]: 2026-01-29 09:13:42.497686093 +0000 UTC m=+1.591061980 container died 4f6f8442eb8f46b22501508a7a6681ed42b7c26b5d24500c33f5ef5a2949dab1 (image=quay.io/ceph/ceph:v20, name=objective_yalow, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 09:13:42 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [2] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b35da3b2c0dd41a7436238eb57543097379e14b63c6bfbf123e303fa8b89904b-merged.mount: Deactivated successfully.
Jan 29 09:13:43 compute-0 podman[90956]: 2026-01-29 09:13:43.111693483 +0000 UTC m=+2.205069360 container remove 4f6f8442eb8f46b22501508a7a6681ed42b7c26b5d24500c33f5ef5a2949dab1 (image=quay.io/ceph/ceph:v20, name=objective_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 09:13:43 compute-0 sudo[90953]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:43 compute-0 systemd[1]: libpod-conmon-4f6f8442eb8f46b22501508a7a6681ed42b7c26b5d24500c33f5ef5a2949dab1.scope: Deactivated successfully.
Jan 29 09:13:43 compute-0 sudo[91033]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szuxpipqaioieslwduswjabrcgskiylq ; /usr/bin/python3'
Jan 29 09:13:43 compute-0 sudo[91033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:13:43 compute-0 python3[91035]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 29 09:13:43 compute-0 podman[91036]: 2026-01-29 09:13:43.484984332 +0000 UTC m=+0.024379379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:43 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2286654678' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 09:13:43 compute-0 ceph-mon[75183]: osdmap e25: 3 total, 3 up, 3 in
Jan 29 09:13:43 compute-0 podman[91036]: 2026-01-29 09:13:43.586488749 +0000 UTC m=+0.125883766 container create df632b06a9978de5eeb7c95fc89a5b66115d6e953e827f256d6a84795879f249 (image=quay.io/ceph/ceph:v20, name=relaxed_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 09:13:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 29 09:13:43 compute-0 systemd[1]: Started libpod-conmon-df632b06a9978de5eeb7c95fc89a5b66115d6e953e827f256d6a84795879f249.scope.
Jan 29 09:13:43 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 29 09:13:43 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d5256d06fd3bbd65d9ed73bcd3ba00f701108a4fd8d9274f33a6150e757aa1c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d5256d06fd3bbd65d9ed73bcd3ba00f701108a4fd8d9274f33a6150e757aa1c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:43 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 26 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [2] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:43 compute-0 podman[91036]: 2026-01-29 09:13:43.904995535 +0000 UTC m=+0.444390582 container init df632b06a9978de5eeb7c95fc89a5b66115d6e953e827f256d6a84795879f249 (image=quay.io/ceph/ceph:v20, name=relaxed_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:13:43 compute-0 podman[91036]: 2026-01-29 09:13:43.911820623 +0000 UTC m=+0.451215650 container start df632b06a9978de5eeb7c95fc89a5b66115d6e953e827f256d6a84795879f249 (image=quay.io/ceph/ceph:v20, name=relaxed_pascal, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 09:13:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v71: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:44 compute-0 podman[91036]: 2026-01-29 09:13:44.085438647 +0000 UTC m=+0.624833694 container attach df632b06a9978de5eeb7c95fc89a5b66115d6e953e827f256d6a84795879f249 (image=quay.io/ceph/ceph:v20, name=relaxed_pascal, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:13:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 29 09:13:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2605024822' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 09:13:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 29 09:13:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2605024822' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 09:13:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 29 09:13:44 compute-0 relaxed_pascal[91051]: pool 'cephfs.cephfs.meta' created
Jan 29 09:13:44 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 29 09:13:44 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [0] r=0 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:44 compute-0 systemd[1]: libpod-df632b06a9978de5eeb7c95fc89a5b66115d6e953e827f256d6a84795879f249.scope: Deactivated successfully.
Jan 29 09:13:44 compute-0 podman[91036]: 2026-01-29 09:13:44.717638062 +0000 UTC m=+1.257033079 container died df632b06a9978de5eeb7c95fc89a5b66115d6e953e827f256d6a84795879f249 (image=quay.io/ceph/ceph:v20, name=relaxed_pascal, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 09:13:44 compute-0 ceph-mon[75183]: osdmap e26: 3 total, 3 up, 3 in
Jan 29 09:13:44 compute-0 ceph-mon[75183]: pgmap v71: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:44 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2605024822' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 09:13:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d5256d06fd3bbd65d9ed73bcd3ba00f701108a4fd8d9274f33a6150e757aa1c-merged.mount: Deactivated successfully.
Jan 29 09:13:45 compute-0 podman[91036]: 2026-01-29 09:13:45.041818436 +0000 UTC m=+1.581213453 container remove df632b06a9978de5eeb7c95fc89a5b66115d6e953e827f256d6a84795879f249 (image=quay.io/ceph/ceph:v20, name=relaxed_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:45 compute-0 systemd[1]: libpod-conmon-df632b06a9978de5eeb7c95fc89a5b66115d6e953e827f256d6a84795879f249.scope: Deactivated successfully.
Jan 29 09:13:45 compute-0 sudo[91033]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:45 compute-0 sudo[91113]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvzcfsqhepsypfafgjkzjdkvguaimnou ; /usr/bin/python3'
Jan 29 09:13:45 compute-0 sudo[91113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:45 compute-0 python3[91115]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:45 compute-0 podman[91116]: 2026-01-29 09:13:45.479798758 +0000 UTC m=+0.087419799 container create ff500f29db6215131f62b38f4fec90396a6ec304ff64b5512b5c4d055fd122cb (image=quay.io/ceph/ceph:v20, name=nice_germain, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 09:13:45 compute-0 podman[91116]: 2026-01-29 09:13:45.415178577 +0000 UTC m=+0.022799628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:45 compute-0 systemd[1]: Started libpod-conmon-ff500f29db6215131f62b38f4fec90396a6ec304ff64b5512b5c4d055fd122cb.scope.
Jan 29 09:13:45 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c9eb41743d18672d2d964b0ac938b336a4225ffac02d2387921143e58fb1ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c9eb41743d18672d2d964b0ac938b336a4225ffac02d2387921143e58fb1ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:45 compute-0 podman[91116]: 2026-01-29 09:13:45.574320532 +0000 UTC m=+0.181941593 container init ff500f29db6215131f62b38f4fec90396a6ec304ff64b5512b5c4d055fd122cb (image=quay.io/ceph/ceph:v20, name=nice_germain, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:45 compute-0 podman[91116]: 2026-01-29 09:13:45.579719833 +0000 UTC m=+0.187340874 container start ff500f29db6215131f62b38f4fec90396a6ec304ff64b5512b5c4d055fd122cb (image=quay.io/ceph/ceph:v20, name=nice_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:13:45 compute-0 podman[91116]: 2026-01-29 09:13:45.58418426 +0000 UTC m=+0.191805361 container attach ff500f29db6215131f62b38f4fec90396a6ec304ff64b5512b5c4d055fd122cb (image=quay.io/ceph/ceph:v20, name=nice_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:13:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 29 09:13:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 29 09:13:45 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 29 09:13:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 28 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [0] r=0 lpr=27 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:45 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2605024822' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 09:13:45 compute-0 ceph-mon[75183]: osdmap e27: 3 total, 3 up, 3 in
Jan 29 09:13:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v74: 6 pgs: 1 unknown, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 29 09:13:46 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/94850690' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 09:13:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 29 09:13:46 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/94850690' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 09:13:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 29 09:13:46 compute-0 nice_germain[91131]: pool 'cephfs.cephfs.data' created
Jan 29 09:13:46 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 29 09:13:46 compute-0 systemd[1]: libpod-ff500f29db6215131f62b38f4fec90396a6ec304ff64b5512b5c4d055fd122cb.scope: Deactivated successfully.
Jan 29 09:13:46 compute-0 podman[91116]: 2026-01-29 09:13:46.820533546 +0000 UTC m=+1.428154587 container died ff500f29db6215131f62b38f4fec90396a6ec304ff64b5512b5c4d055fd122cb (image=quay.io/ceph/ceph:v20, name=nice_germain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:13:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=0/0 n=0 ec=29/29 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1c9eb41743d18672d2d964b0ac938b336a4225ffac02d2387921143e58fb1ab-merged.mount: Deactivated successfully.
Jan 29 09:13:46 compute-0 ceph-mon[75183]: osdmap e28: 3 total, 3 up, 3 in
Jan 29 09:13:46 compute-0 ceph-mon[75183]: pgmap v74: 6 pgs: 1 unknown, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:46 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/94850690' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 09:13:46 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/94850690' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 09:13:46 compute-0 ceph-mon[75183]: osdmap e29: 3 total, 3 up, 3 in
Jan 29 09:13:46 compute-0 podman[91116]: 2026-01-29 09:13:46.981838927 +0000 UTC m=+1.589460008 container remove ff500f29db6215131f62b38f4fec90396a6ec304ff64b5512b5c4d055fd122cb (image=quay.io/ceph/ceph:v20, name=nice_germain, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:13:46 compute-0 systemd[1]: libpod-conmon-ff500f29db6215131f62b38f4fec90396a6ec304ff64b5512b5c4d055fd122cb.scope: Deactivated successfully.
Jan 29 09:13:47 compute-0 sudo[91113]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:47 compute-0 sudo[91196]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtpdouwshfksdkuvsjjghlnzloonlofc ; /usr/bin/python3'
Jan 29 09:13:47 compute-0 sudo[91196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:47 compute-0 python3[91198]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:47 compute-0 podman[91199]: 2026-01-29 09:13:47.450963724 +0000 UTC m=+0.118797950 container create e916399f39c0c1184e8f94046904826301beb6f9a600dbbb12b9640d63d41667 (image=quay.io/ceph/ceph:v20, name=adoring_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 09:13:47 compute-0 podman[91199]: 2026-01-29 09:13:47.356568294 +0000 UTC m=+0.024402530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:47 compute-0 systemd[1]: Started libpod-conmon-e916399f39c0c1184e8f94046904826301beb6f9a600dbbb12b9640d63d41667.scope.
Jan 29 09:13:47 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ed108ab99963bd83d005a27c660a756f53b6d598ef778cf2eb8c38e04e323e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ed108ab99963bd83d005a27c660a756f53b6d598ef778cf2eb8c38e04e323e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:47 compute-0 podman[91199]: 2026-01-29 09:13:47.547772998 +0000 UTC m=+0.215607254 container init e916399f39c0c1184e8f94046904826301beb6f9a600dbbb12b9640d63d41667 (image=quay.io/ceph/ceph:v20, name=adoring_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 09:13:47 compute-0 podman[91199]: 2026-01-29 09:13:47.553241681 +0000 UTC m=+0.221075907 container start e916399f39c0c1184e8f94046904826301beb6f9a600dbbb12b9640d63d41667 (image=quay.io/ceph/ceph:v20, name=adoring_cori, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:47 compute-0 podman[91199]: 2026-01-29 09:13:47.561815635 +0000 UTC m=+0.229649881 container attach e916399f39c0c1184e8f94046904826301beb6f9a600dbbb12b9640d63d41667 (image=quay.io/ceph/ceph:v20, name=adoring_cori, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:13:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 29 09:13:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 29 09:13:47 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 29 09:13:47 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 30 pg[7.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 29 09:13:47 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4216235891' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 29 09:13:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:13:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 29 09:13:48 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4216235891' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 29 09:13:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 29 09:13:48 compute-0 adoring_cori[91214]: enabled application 'rbd' on pool 'vms'
Jan 29 09:13:48 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 29 09:13:48 compute-0 ceph-mon[75183]: osdmap e30: 3 total, 3 up, 3 in
Jan 29 09:13:48 compute-0 ceph-mon[75183]: pgmap v77: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:48 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4216235891' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 29 09:13:48 compute-0 systemd[1]: libpod-e916399f39c0c1184e8f94046904826301beb6f9a600dbbb12b9640d63d41667.scope: Deactivated successfully.
Jan 29 09:13:48 compute-0 podman[91199]: 2026-01-29 09:13:48.83477895 +0000 UTC m=+1.502613186 container died e916399f39c0c1184e8f94046904826301beb6f9a600dbbb12b9640d63d41667 (image=quay.io/ceph/ceph:v20, name=adoring_cori, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5ed108ab99963bd83d005a27c660a756f53b6d598ef778cf2eb8c38e04e323e-merged.mount: Deactivated successfully.
Jan 29 09:13:48 compute-0 podman[91199]: 2026-01-29 09:13:48.883039353 +0000 UTC m=+1.550873579 container remove e916399f39c0c1184e8f94046904826301beb6f9a600dbbb12b9640d63d41667 (image=quay.io/ceph/ceph:v20, name=adoring_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 09:13:48 compute-0 systemd[1]: libpod-conmon-e916399f39c0c1184e8f94046904826301beb6f9a600dbbb12b9640d63d41667.scope: Deactivated successfully.
Jan 29 09:13:48 compute-0 sudo[91196]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:49 compute-0 sudo[91275]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixarainsxownhwtddjvqmmilvtdsmmyz ; /usr/bin/python3'
Jan 29 09:13:49 compute-0 sudo[91275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:49 compute-0 python3[91277]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:49 compute-0 podman[91278]: 2026-01-29 09:13:49.254506495 +0000 UTC m=+0.040606794 container create 11a79a24057a6e39a09dc991aa55207f78fb3833e9e028553437ce3a346077b0 (image=quay.io/ceph/ceph:v20, name=epic_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 09:13:49 compute-0 systemd[1]: Started libpod-conmon-11a79a24057a6e39a09dc991aa55207f78fb3833e9e028553437ce3a346077b0.scope.
Jan 29 09:13:49 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13d663b89d11e603cea9e746a18b2d1c68c34a467ecba468bc812ee091db0eb7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13d663b89d11e603cea9e746a18b2d1c68c34a467ecba468bc812ee091db0eb7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:49 compute-0 podman[91278]: 2026-01-29 09:13:49.316066136 +0000 UTC m=+0.102166465 container init 11a79a24057a6e39a09dc991aa55207f78fb3833e9e028553437ce3a346077b0 (image=quay.io/ceph/ceph:v20, name=epic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 29 09:13:49 compute-0 podman[91278]: 2026-01-29 09:13:49.321686703 +0000 UTC m=+0.107787002 container start 11a79a24057a6e39a09dc991aa55207f78fb3833e9e028553437ce3a346077b0 (image=quay.io/ceph/ceph:v20, name=epic_proskuriakova, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:13:49 compute-0 podman[91278]: 2026-01-29 09:13:49.32578085 +0000 UTC m=+0.111881179 container attach 11a79a24057a6e39a09dc991aa55207f78fb3833e9e028553437ce3a346077b0 (image=quay.io/ceph/ceph:v20, name=epic_proskuriakova, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:49 compute-0 podman[91278]: 2026-01-29 09:13:49.237408087 +0000 UTC m=+0.023508406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 29 09:13:49 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2327285045' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 29 09:13:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 29 09:13:49 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4216235891' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 29 09:13:49 compute-0 ceph-mon[75183]: osdmap e31: 3 total, 3 up, 3 in
Jan 29 09:13:49 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2327285045' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 29 09:13:49 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2327285045' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 29 09:13:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 29 09:13:49 compute-0 epic_proskuriakova[91294]: enabled application 'rbd' on pool 'volumes'
Jan 29 09:13:49 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 29 09:13:49 compute-0 systemd[1]: libpod-11a79a24057a6e39a09dc991aa55207f78fb3833e9e028553437ce3a346077b0.scope: Deactivated successfully.
Jan 29 09:13:49 compute-0 podman[91278]: 2026-01-29 09:13:49.867840715 +0000 UTC m=+0.653941014 container died 11a79a24057a6e39a09dc991aa55207f78fb3833e9e028553437ce3a346077b0 (image=quay.io/ceph/ceph:v20, name=epic_proskuriakova, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 09:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-13d663b89d11e603cea9e746a18b2d1c68c34a467ecba468bc812ee091db0eb7-merged.mount: Deactivated successfully.
Jan 29 09:13:49 compute-0 podman[91278]: 2026-01-29 09:13:49.911725564 +0000 UTC m=+0.697825863 container remove 11a79a24057a6e39a09dc991aa55207f78fb3833e9e028553437ce3a346077b0 (image=quay.io/ceph/ceph:v20, name=epic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:49 compute-0 systemd[1]: libpod-conmon-11a79a24057a6e39a09dc991aa55207f78fb3833e9e028553437ce3a346077b0.scope: Deactivated successfully.
Jan 29 09:13:49 compute-0 sudo[91275]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:50 compute-0 sudo[91354]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urogsmqmamlvtcxvulmbugwwljbpwehf ; /usr/bin/python3'
Jan 29 09:13:50 compute-0 sudo[91354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:50 compute-0 python3[91356]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:50 compute-0 podman[91357]: 2026-01-29 09:13:50.223597056 +0000 UTC m=+0.040042009 container create 3f9aec4984e361847f5010a9df78a2d419c474c5b9435e18f05b54c1214a5a3c (image=quay.io/ceph/ceph:v20, name=lucid_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 09:13:50 compute-0 systemd[1]: Started libpod-conmon-3f9aec4984e361847f5010a9df78a2d419c474c5b9435e18f05b54c1214a5a3c.scope.
Jan 29 09:13:50 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44939a1d06cbb7d5efec02a3eb6311735e85d96a5b6a384683c47e693453409/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44939a1d06cbb7d5efec02a3eb6311735e85d96a5b6a384683c47e693453409/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:50 compute-0 podman[91357]: 2026-01-29 09:13:50.205635896 +0000 UTC m=+0.022080879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:50 compute-0 podman[91357]: 2026-01-29 09:13:50.313229661 +0000 UTC m=+0.129674644 container init 3f9aec4984e361847f5010a9df78a2d419c474c5b9435e18f05b54c1214a5a3c (image=quay.io/ceph/ceph:v20, name=lucid_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:50 compute-0 podman[91357]: 2026-01-29 09:13:50.318337745 +0000 UTC m=+0.134782698 container start 3f9aec4984e361847f5010a9df78a2d419c474c5b9435e18f05b54c1214a5a3c (image=quay.io/ceph/ceph:v20, name=lucid_haslett, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 09:13:50 compute-0 podman[91357]: 2026-01-29 09:13:50.32270913 +0000 UTC m=+0.139154083 container attach 3f9aec4984e361847f5010a9df78a2d419c474c5b9435e18f05b54c1214a5a3c (image=quay.io/ceph/ceph:v20, name=lucid_haslett, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:13:50 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 29 09:13:50 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1353002630' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 29 09:13:50 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 29 09:13:50 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1353002630' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 29 09:13:50 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 29 09:13:50 compute-0 lucid_haslett[91373]: enabled application 'rbd' on pool 'backups'
Jan 29 09:13:50 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 29 09:13:50 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2327285045' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 29 09:13:50 compute-0 ceph-mon[75183]: osdmap e32: 3 total, 3 up, 3 in
Jan 29 09:13:50 compute-0 ceph-mon[75183]: pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:50 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1353002630' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 29 09:13:50 compute-0 systemd[1]: libpod-3f9aec4984e361847f5010a9df78a2d419c474c5b9435e18f05b54c1214a5a3c.scope: Deactivated successfully.
Jan 29 09:13:50 compute-0 conmon[91373]: conmon 3f9aec4984e361847f50 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f9aec4984e361847f5010a9df78a2d419c474c5b9435e18f05b54c1214a5a3c.scope/container/memory.events
Jan 29 09:13:50 compute-0 podman[91357]: 2026-01-29 09:13:50.888454526 +0000 UTC m=+0.704899479 container died 3f9aec4984e361847f5010a9df78a2d419c474c5b9435e18f05b54c1214a5a3c (image=quay.io/ceph/ceph:v20, name=lucid_haslett, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b44939a1d06cbb7d5efec02a3eb6311735e85d96a5b6a384683c47e693453409-merged.mount: Deactivated successfully.
Jan 29 09:13:50 compute-0 podman[91357]: 2026-01-29 09:13:50.921794468 +0000 UTC m=+0.738239421 container remove 3f9aec4984e361847f5010a9df78a2d419c474c5b9435e18f05b54c1214a5a3c (image=quay.io/ceph/ceph:v20, name=lucid_haslett, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:50 compute-0 systemd[1]: libpod-conmon-3f9aec4984e361847f5010a9df78a2d419c474c5b9435e18f05b54c1214a5a3c.scope: Deactivated successfully.
Jan 29 09:13:50 compute-0 sudo[91354]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:51 compute-0 sudo[91433]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xadpeyrqpfmduaxdfgyavekvbexsdqip ; /usr/bin/python3'
Jan 29 09:13:51 compute-0 sudo[91433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:51 compute-0 python3[91435]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:51 compute-0 podman[91436]: 2026-01-29 09:13:51.277420015 +0000 UTC m=+0.038439857 container create 2d5e51d752380e84ba114d26ec1fce62c901bcb599b987497dadeab02e570ea2 (image=quay.io/ceph/ceph:v20, name=nice_swanson, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:13:51 compute-0 systemd[1]: Started libpod-conmon-2d5e51d752380e84ba114d26ec1fce62c901bcb599b987497dadeab02e570ea2.scope.
Jan 29 09:13:51 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5941a3a7e0e3b3a23dd30585b6fcbe92034019b99e17ea236f301cf02e19993/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5941a3a7e0e3b3a23dd30585b6fcbe92034019b99e17ea236f301cf02e19993/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:51 compute-0 podman[91436]: 2026-01-29 09:13:51.339003897 +0000 UTC m=+0.100023749 container init 2d5e51d752380e84ba114d26ec1fce62c901bcb599b987497dadeab02e570ea2 (image=quay.io/ceph/ceph:v20, name=nice_swanson, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 09:13:51 compute-0 podman[91436]: 2026-01-29 09:13:51.344693446 +0000 UTC m=+0.105713298 container start 2d5e51d752380e84ba114d26ec1fce62c901bcb599b987497dadeab02e570ea2 (image=quay.io/ceph/ceph:v20, name=nice_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:13:51 compute-0 podman[91436]: 2026-01-29 09:13:51.349254395 +0000 UTC m=+0.110274277 container attach 2d5e51d752380e84ba114d26ec1fce62c901bcb599b987497dadeab02e570ea2 (image=quay.io/ceph/ceph:v20, name=nice_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:13:51 compute-0 podman[91436]: 2026-01-29 09:13:51.259993649 +0000 UTC m=+0.021013521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 29 09:13:51 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/317578345' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 29 09:13:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 29 09:13:51 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1353002630' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 29 09:13:51 compute-0 ceph-mon[75183]: osdmap e33: 3 total, 3 up, 3 in
Jan 29 09:13:51 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/317578345' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 29 09:13:51 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/317578345' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 29 09:13:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 29 09:13:51 compute-0 nice_swanson[91452]: enabled application 'rbd' on pool 'images'
Jan 29 09:13:51 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 29 09:13:51 compute-0 systemd[1]: libpod-2d5e51d752380e84ba114d26ec1fce62c901bcb599b987497dadeab02e570ea2.scope: Deactivated successfully.
Jan 29 09:13:51 compute-0 podman[91436]: 2026-01-29 09:13:51.920262249 +0000 UTC m=+0.681282101 container died 2d5e51d752380e84ba114d26ec1fce62c901bcb599b987497dadeab02e570ea2 (image=quay.io/ceph/ceph:v20, name=nice_swanson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5941a3a7e0e3b3a23dd30585b6fcbe92034019b99e17ea236f301cf02e19993-merged.mount: Deactivated successfully.
Jan 29 09:13:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:51 compute-0 podman[91436]: 2026-01-29 09:13:51.967610438 +0000 UTC m=+0.728630300 container remove 2d5e51d752380e84ba114d26ec1fce62c901bcb599b987497dadeab02e570ea2 (image=quay.io/ceph/ceph:v20, name=nice_swanson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:51 compute-0 systemd[1]: libpod-conmon-2d5e51d752380e84ba114d26ec1fce62c901bcb599b987497dadeab02e570ea2.scope: Deactivated successfully.
Jan 29 09:13:51 compute-0 sudo[91433]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:52 compute-0 sudo[91511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuxvhzpcczqflwnkcxyiwliceuptpupz ; /usr/bin/python3'
Jan 29 09:13:52 compute-0 sudo[91511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:52 compute-0 python3[91513]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:52 compute-0 podman[91514]: 2026-01-29 09:13:52.326281855 +0000 UTC m=+0.040777709 container create ae2a549f5a6e9cfd8b5994b312bca7eb3bf7a05d161dfdf0a4a977524829db32 (image=quay.io/ceph/ceph:v20, name=charming_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 09:13:52 compute-0 systemd[1]: Started libpod-conmon-ae2a549f5a6e9cfd8b5994b312bca7eb3bf7a05d161dfdf0a4a977524829db32.scope.
Jan 29 09:13:52 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815fe50fad59215ef0e59f6b858fc17227a7d2337e982d0f3c608885ca643cef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815fe50fad59215ef0e59f6b858fc17227a7d2337e982d0f3c608885ca643cef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:52 compute-0 podman[91514]: 2026-01-29 09:13:52.305054869 +0000 UTC m=+0.019550753 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:52 compute-0 podman[91514]: 2026-01-29 09:13:52.406919805 +0000 UTC m=+0.121415689 container init ae2a549f5a6e9cfd8b5994b312bca7eb3bf7a05d161dfdf0a4a977524829db32 (image=quay.io/ceph/ceph:v20, name=charming_wright, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:13:52 compute-0 podman[91514]: 2026-01-29 09:13:52.411608548 +0000 UTC m=+0.126104402 container start ae2a549f5a6e9cfd8b5994b312bca7eb3bf7a05d161dfdf0a4a977524829db32 (image=quay.io/ceph/ceph:v20, name=charming_wright, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:52 compute-0 podman[91514]: 2026-01-29 09:13:52.416211918 +0000 UTC m=+0.130707962 container attach ae2a549f5a6e9cfd8b5994b312bca7eb3bf7a05d161dfdf0a4a977524829db32 (image=quay.io/ceph/ceph:v20, name=charming_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:13:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 29 09:13:52 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3183565838' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 29 09:13:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 29 09:13:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/317578345' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 29 09:13:52 compute-0 ceph-mon[75183]: osdmap e34: 3 total, 3 up, 3 in
Jan 29 09:13:52 compute-0 ceph-mon[75183]: pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3183565838' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 29 09:13:52 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3183565838' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 29 09:13:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 29 09:13:52 compute-0 charming_wright[91530]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 29 09:13:52 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 29 09:13:52 compute-0 systemd[1]: libpod-ae2a549f5a6e9cfd8b5994b312bca7eb3bf7a05d161dfdf0a4a977524829db32.scope: Deactivated successfully.
Jan 29 09:13:52 compute-0 podman[91555]: 2026-01-29 09:13:52.972753043 +0000 UTC m=+0.024241425 container died ae2a549f5a6e9cfd8b5994b312bca7eb3bf7a05d161dfdf0a4a977524829db32 (image=quay.io/ceph/ceph:v20, name=charming_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 09:13:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-815fe50fad59215ef0e59f6b858fc17227a7d2337e982d0f3c608885ca643cef-merged.mount: Deactivated successfully.
Jan 29 09:13:53 compute-0 podman[91555]: 2026-01-29 09:13:53.010011618 +0000 UTC m=+0.061499980 container remove ae2a549f5a6e9cfd8b5994b312bca7eb3bf7a05d161dfdf0a4a977524829db32 (image=quay.io/ceph/ceph:v20, name=charming_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 09:13:53 compute-0 systemd[1]: libpod-conmon-ae2a549f5a6e9cfd8b5994b312bca7eb3bf7a05d161dfdf0a4a977524829db32.scope: Deactivated successfully.
Jan 29 09:13:53 compute-0 sudo[91511]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:53 compute-0 sudo[91593]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmgndvjgstwqkkhrvdtgoeeyldzrkanm ; /usr/bin/python3'
Jan 29 09:13:53 compute-0 sudo[91593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:53 compute-0 python3[91595]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:13:53 compute-0 podman[91596]: 2026-01-29 09:13:53.344706667 +0000 UTC m=+0.061045359 container create cbd6add13c428edf66a779a58af926b5942db88e7e1f8ce3f813bad7895d61f8 (image=quay.io/ceph/ceph:v20, name=hardcore_maxwell, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:53 compute-0 systemd[1]: Started libpod-conmon-cbd6add13c428edf66a779a58af926b5942db88e7e1f8ce3f813bad7895d61f8.scope.
Jan 29 09:13:53 compute-0 podman[91596]: 2026-01-29 09:13:53.309003202 +0000 UTC m=+0.025341914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:53 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f3c0ce1ff2f2cd5fbb43ff706a066c54a309ac1a18e621a68ab8eadb5a9c4e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f3c0ce1ff2f2cd5fbb43ff706a066c54a309ac1a18e621a68ab8eadb5a9c4e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:53 compute-0 podman[91596]: 2026-01-29 09:13:53.427500583 +0000 UTC m=+0.143839295 container init cbd6add13c428edf66a779a58af926b5942db88e7e1f8ce3f813bad7895d61f8 (image=quay.io/ceph/ceph:v20, name=hardcore_maxwell, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 09:13:53 compute-0 podman[91596]: 2026-01-29 09:13:53.431348684 +0000 UTC m=+0.147687376 container start cbd6add13c428edf66a779a58af926b5942db88e7e1f8ce3f813bad7895d61f8 (image=quay.io/ceph/ceph:v20, name=hardcore_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 09:13:53 compute-0 podman[91596]: 2026-01-29 09:13:53.435183724 +0000 UTC m=+0.151522436 container attach cbd6add13c428edf66a779a58af926b5942db88e7e1f8ce3f813bad7895d61f8 (image=quay.io/ceph/ceph:v20, name=hardcore_maxwell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:13:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 29 09:13:53 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/17953285' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 29 09:13:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 29 09:13:53 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3183565838' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 29 09:13:53 compute-0 ceph-mon[75183]: osdmap e35: 3 total, 3 up, 3 in
Jan 29 09:13:53 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/17953285' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 29 09:13:53 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/17953285' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 29 09:13:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 29 09:13:53 compute-0 hardcore_maxwell[91612]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 29 09:13:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v86: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:53 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 29 09:13:53 compute-0 systemd[1]: libpod-cbd6add13c428edf66a779a58af926b5942db88e7e1f8ce3f813bad7895d61f8.scope: Deactivated successfully.
Jan 29 09:13:53 compute-0 podman[91596]: 2026-01-29 09:13:53.977721843 +0000 UTC m=+0.694060535 container died cbd6add13c428edf66a779a58af926b5942db88e7e1f8ce3f813bad7895d61f8 (image=quay.io/ceph/ceph:v20, name=hardcore_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:13:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-55f3c0ce1ff2f2cd5fbb43ff706a066c54a309ac1a18e621a68ab8eadb5a9c4e-merged.mount: Deactivated successfully.
Jan 29 09:13:54 compute-0 podman[91596]: 2026-01-29 09:13:54.260313778 +0000 UTC m=+0.976652470 container remove cbd6add13c428edf66a779a58af926b5942db88e7e1f8ce3f813bad7895d61f8 (image=quay.io/ceph/ceph:v20, name=hardcore_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 09:13:54 compute-0 sudo[91593]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:54 compute-0 systemd[1]: libpod-conmon-cbd6add13c428edf66a779a58af926b5942db88e7e1f8ce3f813bad7895d61f8.scope: Deactivated successfully.
Jan 29 09:13:55 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/17953285' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 29 09:13:55 compute-0 ceph-mon[75183]: pgmap v86: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:55 compute-0 ceph-mon[75183]: osdmap e36: 3 total, 3 up, 3 in
Jan 29 09:13:55 compute-0 python3[91724]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 09:13:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:13:55
Jan 29 09:13:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:13:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:13:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', 'vms', '.mgr', 'cephfs.cephfs.data', 'images']
Jan 29 09:13:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:13:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v87: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:56 compute-0 ceph-mon[75183]: pgmap v87: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:56 compute-0 python3[91795]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769678035.6284308-36793-215721812187592/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 29 09:13:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 29 09:13:56 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 29 09:13:56 compute-0 sudo[91843]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahuynprdkjsamgutbhuqvxdojyufaety ; /usr/bin/python3'
Jan 29 09:13:56 compute-0 sudo[91843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:13:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:13:56 compute-0 python3[91845]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:56 compute-0 podman[91846]: 2026-01-29 09:13:56.731750216 +0000 UTC m=+0.038843327 container create b284df775071217e2a28738014d0b5dbd50fc0fe7485048aac82bcdcfe44c9e1 (image=quay.io/ceph/ceph:v20, name=objective_taussig, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:56 compute-0 systemd[1]: Started libpod-conmon-b284df775071217e2a28738014d0b5dbd50fc0fe7485048aac82bcdcfe44c9e1.scope.
Jan 29 09:13:56 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120ac140250aa4cdf5c4f652632c4c0d02d06703b37d01b91a5f44980baa9978/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120ac140250aa4cdf5c4f652632c4c0d02d06703b37d01b91a5f44980baa9978/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120ac140250aa4cdf5c4f652632c4c0d02d06703b37d01b91a5f44980baa9978/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:56 compute-0 podman[91846]: 2026-01-29 09:13:56.712302227 +0000 UTC m=+0.019395358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:56 compute-0 podman[91846]: 2026-01-29 09:13:56.811233337 +0000 UTC m=+0.118326438 container init b284df775071217e2a28738014d0b5dbd50fc0fe7485048aac82bcdcfe44c9e1 (image=quay.io/ceph/ceph:v20, name=objective_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:56 compute-0 podman[91846]: 2026-01-29 09:13:56.81557805 +0000 UTC m=+0.122671161 container start b284df775071217e2a28738014d0b5dbd50fc0fe7485048aac82bcdcfe44c9e1 (image=quay.io/ceph/ceph:v20, name=objective_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:13:56 compute-0 podman[91846]: 2026-01-29 09:13:56.819548964 +0000 UTC m=+0.126642105 container attach b284df775071217e2a28738014d0b5dbd50fc0fe7485048aac82bcdcfe44c9e1 (image=quay.io/ceph/ceph:v20, name=objective_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 29 09:13:57 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 29 09:13:57 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 32f24b3d-9251-4123-8192-226f49c67beb (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 29 09:13:57 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14232 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:13:57 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 29 09:13:57 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0[75179]: 2026-01-29T09:13:57.308+0000 7f7e13597640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).mds e2 new map
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-01-29T09:13:57:309478+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-29T09:13:57.309237+0000
                                           modified        2026-01-29T09:13:57.309237+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 29 09:13:57 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 29 09:13:57 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 29 09:13:57 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev b7dd55ff-461c-4acb-9fdc-82aa461e714c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:57 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 29 09:13:57 compute-0 systemd[1]: libpod-b284df775071217e2a28738014d0b5dbd50fc0fe7485048aac82bcdcfe44c9e1.scope: Deactivated successfully.
Jan 29 09:13:57 compute-0 podman[91846]: 2026-01-29 09:13:57.358478318 +0000 UTC m=+0.665571449 container died b284df775071217e2a28738014d0b5dbd50fc0fe7485048aac82bcdcfe44c9e1 (image=quay.io/ceph/ceph:v20, name=objective_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:13:57 compute-0 sudo[91885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-120ac140250aa4cdf5c4f652632c4c0d02d06703b37d01b91a5f44980baa9978-merged.mount: Deactivated successfully.
Jan 29 09:13:57 compute-0 sudo[91885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:57 compute-0 sudo[91885]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:57 compute-0 podman[91846]: 2026-01-29 09:13:57.399727478 +0000 UTC m=+0.706820599 container remove b284df775071217e2a28738014d0b5dbd50fc0fe7485048aac82bcdcfe44c9e1 (image=quay.io/ceph/ceph:v20, name=objective_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 09:13:57 compute-0 systemd[1]: libpod-conmon-b284df775071217e2a28738014d0b5dbd50fc0fe7485048aac82bcdcfe44c9e1.scope: Deactivated successfully.
Jan 29 09:13:57 compute-0 sudo[91843]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:57 compute-0 sudo[91923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 29 09:13:57 compute-0 sudo[91923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:57 compute-0 sudo[91971]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szneobtnrcppmjnpviqcocpoqkoygxrs ; /usr/bin/python3'
Jan 29 09:13:57 compute-0 sudo[91971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:57 compute-0 python3[91973]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:13:57 compute-0 podman[91997]: 2026-01-29 09:13:57.753737653 +0000 UTC m=+0.042445872 container create 48361ae8a1d35b0102e143ab5ad8bf23fdf336afcbb7a1afd3452c842e12c545 (image=quay.io/ceph/ceph:v20, name=vibrant_galois, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:13:57 compute-0 systemd[1]: Started libpod-conmon-48361ae8a1d35b0102e143ab5ad8bf23fdf336afcbb7a1afd3452c842e12c545.scope.
Jan 29 09:13:57 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b47486f96509aa6eb42976b52e2fdaa17af81a91733d56062c2e526ee9ac04c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b47486f96509aa6eb42976b52e2fdaa17af81a91733d56062c2e526ee9ac04c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b47486f96509aa6eb42976b52e2fdaa17af81a91733d56062c2e526ee9ac04c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:13:57 compute-0 podman[91997]: 2026-01-29 09:13:57.733616786 +0000 UTC m=+0.022325035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:13:57 compute-0 podman[91997]: 2026-01-29 09:13:57.838862361 +0000 UTC m=+0.127570610 container init 48361ae8a1d35b0102e143ab5ad8bf23fdf336afcbb7a1afd3452c842e12c545 (image=quay.io/ceph/ceph:v20, name=vibrant_galois, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 09:13:57 compute-0 podman[91997]: 2026-01-29 09:13:57.844079237 +0000 UTC m=+0.132787456 container start 48361ae8a1d35b0102e143ab5ad8bf23fdf336afcbb7a1afd3452c842e12c545 (image=quay.io/ceph/ceph:v20, name=vibrant_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:13:57 compute-0 podman[92030]: 2026-01-29 09:13:57.84572099 +0000 UTC m=+0.063300358 container exec 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 09:13:57 compute-0 podman[91997]: 2026-01-29 09:13:57.847750403 +0000 UTC m=+0.136458632 container attach 48361ae8a1d35b0102e143ab5ad8bf23fdf336afcbb7a1afd3452c842e12c545 (image=quay.io/ceph/ceph:v20, name=vibrant_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:13:57 compute-0 podman[92030]: 2026-01-29 09:13:57.958893962 +0000 UTC m=+0.176473300 container exec_died 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 09:13:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v90: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 09:13:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 09:13:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 09:13:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 29 09:13:58 compute-0 ceph-mon[75183]: osdmap e37: 3 total, 3 up, 3 in
Jan 29 09:13:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 29 09:13:58 compute-0 ceph-mon[75183]: from='client.14232 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:13:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 29 09:13:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 29 09:13:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 29 09:13:58 compute-0 ceph-mon[75183]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 29 09:13:58 compute-0 ceph-mon[75183]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 29 09:13:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 29 09:13:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 29 09:13:58 compute-0 ceph-mon[75183]: osdmap e38: 3 total, 3 up, 3 in
Jan 29 09:13:58 compute-0 ceph-mon[75183]: fsmap cephfs:0
Jan 29 09:13:58 compute-0 ceph-mon[75183]: Saving service mds.cephfs spec with placement compute-0
Jan 29 09:13:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 29 09:13:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:58 compute-0 ceph-mon[75183]: pgmap v90: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 09:13:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 09:13:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:13:58 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 29 09:13:58 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 29 09:13:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 29 09:13:58 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:58 compute-0 vibrant_galois[92042]: Scheduled mds.cephfs update...
Jan 29 09:13:58 compute-0 systemd[1]: libpod-48361ae8a1d35b0102e143ab5ad8bf23fdf336afcbb7a1afd3452c842e12c545.scope: Deactivated successfully.
Jan 29 09:13:58 compute-0 podman[91997]: 2026-01-29 09:13:58.290461559 +0000 UTC m=+0.579169778 container died 48361ae8a1d35b0102e143ab5ad8bf23fdf336afcbb7a1afd3452c842e12c545 (image=quay.io/ceph/ceph:v20, name=vibrant_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 29 09:13:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:13:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b47486f96509aa6eb42976b52e2fdaa17af81a91733d56062c2e526ee9ac04c-merged.mount: Deactivated successfully.
Jan 29 09:13:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 29 09:13:58 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 29 09:13:58 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 09:13:58 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 09:13:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 29 09:13:58 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 29 09:13:58 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 05bf685a-3097-44a8-ac9c-5dca1daccbf6 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 29 09:13:58 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 39 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=39 pruub=12.787563324s) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active pruub 72.987030029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:13:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 29 09:13:58 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 29 09:13:58 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 39 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=39 pruub=12.787563324s) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown pruub 72.987030029s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:58 compute-0 podman[91997]: 2026-01-29 09:13:58.360838351 +0000 UTC m=+0.649546570 container remove 48361ae8a1d35b0102e143ab5ad8bf23fdf336afcbb7a1afd3452c842e12c545 (image=quay.io/ceph/ceph:v20, name=vibrant_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 09:13:58 compute-0 sudo[91971]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:58 compute-0 systemd[1]: libpod-conmon-48361ae8a1d35b0102e143ab5ad8bf23fdf336afcbb7a1afd3452c842e12c545.scope: Deactivated successfully.
Jan 29 09:13:58 compute-0 sudo[91923]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:13:58 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:13:58 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:58 compute-0 sudo[92218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:13:58 compute-0 sudo[92218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:58 compute-0 sudo[92218]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:58 compute-0 sudo[92243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:13:58 compute-0 sudo[92243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 39 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=39 pruub=9.437110901s) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active pruub 61.006111145s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 39 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=39 pruub=9.437110901s) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown pruub 61.006111145s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 sudo[92374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuajzyfghsfrfahluztmhmoaluqwbchy ; /usr/bin/python3'
Jan 29 09:13:59 compute-0 sudo[92374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:59 compute-0 sudo[92243]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:13:59 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:13:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:13:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:13:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:13:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 29 09:13:59 compute-0 ceph-mon[75183]: from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:13:59 compute-0 ceph-mon[75183]: Saving service mds.cephfs spec with placement compute-0
Jan 29 09:13:59 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:59 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 29 09:13:59 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 09:13:59 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 09:13:59 compute-0 ceph-mon[75183]: osdmap e39: 3 total, 3 up, 3 in
Jan 29 09:13:59 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 29 09:13:59 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:59 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 29 09:13:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 29 09:13:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:13:59 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1c( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.4( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.2( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.d( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.b( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.10( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.13( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.14( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.19( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1a( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1e( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=21/22 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:13:59 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.c( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.e( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.4( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=39/40 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.10( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 24705217-1baa-459f-ab9a-ee06dcc5dec1 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.13( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.14( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.2( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.10( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.12( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.14( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1a( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.1a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.19( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:13:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=39/40 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.10( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.14( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.12( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=21/21 les/c/f=22/22/0 sis=39) [1] r=0 lpr=39 pi=[21,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:13:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:13:59 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:13:59 compute-0 python3[92376]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 09:13:59 compute-0 sudo[92374]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:59 compute-0 sudo[92377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:13:59 compute-0 sudo[92377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:59 compute-0 sudo[92377]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:59 compute-0 sudo[92423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:13:59 compute-0 sudo[92423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:13:59 compute-0 sudo[92497]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pysedbtbwoaclqsoxonmbigplclxmzln ; /usr/bin/python3'
Jan 29 09:13:59 compute-0 sudo[92497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:13:59 compute-0 python3[92499]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769678039.1174371-36842-235617741328080/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=c5b8879246a4c24c186fd747854ce7c5cfee178a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:13:59 compute-0 sudo[92497]: pam_unix(sudo:session): session closed for user root
Jan 29 09:13:59 compute-0 podman[92512]: 2026-01-29 09:13:59.814233288 +0000 UTC m=+0.040661945 container create 84aea9038380def73719abcb39cc1248654b3d12a525dfe8cee25f24c749d735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lamport, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 09:13:59 compute-0 podman[92512]: 2026-01-29 09:13:59.796875973 +0000 UTC m=+0.023304640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:13:59 compute-0 systemd[1]: Started libpod-conmon-84aea9038380def73719abcb39cc1248654b3d12a525dfe8cee25f24c749d735.scope.
Jan 29 09:13:59 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:13:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v93: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:13:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 09:13:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 09:13:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 09:13:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 09:14:00 compute-0 sudo[92578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpqcziudkjlmvhxmooahekqcefxqttem ; /usr/bin/python3'
Jan 29 09:14:00 compute-0 sudo[92578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:00 compute-0 podman[92512]: 2026-01-29 09:14:00.088184297 +0000 UTC m=+0.314612984 container init 84aea9038380def73719abcb39cc1248654b3d12a525dfe8cee25f24c749d735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lamport, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:14:00 compute-0 podman[92512]: 2026-01-29 09:14:00.095727865 +0000 UTC m=+0.322156522 container start 84aea9038380def73719abcb39cc1248654b3d12a525dfe8cee25f24c749d735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lamport, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:00 compute-0 focused_lamport[92552]: 167 167
Jan 29 09:14:00 compute-0 systemd[1]: libpod-84aea9038380def73719abcb39cc1248654b3d12a525dfe8cee25f24c749d735.scope: Deactivated successfully.
Jan 29 09:14:00 compute-0 podman[92512]: 2026-01-29 09:14:00.105923372 +0000 UTC m=+0.332352079 container attach 84aea9038380def73719abcb39cc1248654b3d12a525dfe8cee25f24c749d735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lamport, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:14:00 compute-0 podman[92512]: 2026-01-29 09:14:00.106553068 +0000 UTC m=+0.332981735 container died 84aea9038380def73719abcb39cc1248654b3d12a525dfe8cee25f24c749d735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lamport, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:14:00 compute-0 python3[92580]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:14:00 compute-0 podman[92593]: 2026-01-29 09:14:00.296030086 +0000 UTC m=+0.024469721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:14:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 29 09:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e54d837a259287199dff2dd80d9f96b1489ba43ac57ebf432d792c047af968a-merged.mount: Deactivated successfully.
Jan 29 09:14:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:14:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:14:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 29 09:14:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:00 compute-0 ceph-mon[75183]: osdmap e40: 3 total, 3 up, 3 in
Jan 29 09:14:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:14:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 29 09:14:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:14:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:14:00 compute-0 ceph-mon[75183]: pgmap v93: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 09:14:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 09:14:00 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 29 09:14:00 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 09:14:00 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 09:14:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 29 09:14:00 compute-0 podman[92512]: 2026-01-29 09:14:00.43602384 +0000 UTC m=+0.662452497 container remove 84aea9038380def73719abcb39cc1248654b3d12a525dfe8cee25f24c749d735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 09:14:00 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 29 09:14:00 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 648ed832-c679-4ead-8b7f-3768684162c7 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 29 09:14:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 29 09:14:00 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 29 09:14:00 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active pruub 68.130348206s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:00 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown pruub 68.130348206s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:00 compute-0 systemd[1]: libpod-conmon-84aea9038380def73719abcb39cc1248654b3d12a525dfe8cee25f24c749d735.scope: Deactivated successfully.
Jan 29 09:14:00 compute-0 podman[92593]: 2026-01-29 09:14:00.567988313 +0000 UTC m=+0.296427918 container create 76effb8c8092158f0934841a4633bd8a477fae26425b8a78310445d4b8de9e51 (image=quay.io/ceph/ceph:v20, name=jovial_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 09:14:00 compute-0 systemd[1]: Started libpod-conmon-76effb8c8092158f0934841a4633bd8a477fae26425b8a78310445d4b8de9e51.scope.
Jan 29 09:14:00 compute-0 podman[92614]: 2026-01-29 09:14:00.616453722 +0000 UTC m=+0.096641330 container create d82df8da81a1c3b834bdf895493a328fb942985394b7b5c741bc1e96751700ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:14:00 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c35b3b653d153072af2b161ee01dccf91182aedcd4520e1ee33fd2f24ea663/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c35b3b653d153072af2b161ee01dccf91182aedcd4520e1ee33fd2f24ea663/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:00 compute-0 systemd[1]: Started libpod-conmon-d82df8da81a1c3b834bdf895493a328fb942985394b7b5c741bc1e96751700ee.scope.
Jan 29 09:14:00 compute-0 podman[92614]: 2026-01-29 09:14:00.57818654 +0000 UTC m=+0.058374158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:00 compute-0 podman[92593]: 2026-01-29 09:14:00.681023492 +0000 UTC m=+0.409463117 container init 76effb8c8092158f0934841a4633bd8a477fae26425b8a78310445d4b8de9e51 (image=quay.io/ceph/ceph:v20, name=jovial_rubin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:00 compute-0 podman[92593]: 2026-01-29 09:14:00.686604138 +0000 UTC m=+0.415043743 container start 76effb8c8092158f0934841a4633bd8a477fae26425b8a78310445d4b8de9e51 (image=quay.io/ceph/ceph:v20, name=jovial_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:14:00 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:00 compute-0 podman[92593]: 2026-01-29 09:14:00.691761823 +0000 UTC m=+0.420201478 container attach 76effb8c8092158f0934841a4633bd8a477fae26425b8a78310445d4b8de9e51 (image=quay.io/ceph/ceph:v20, name=jovial_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 09:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518b648e604cc7f308ba8c3d5320969212af4bcf1be9882e91c5f3d06f07e0ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518b648e604cc7f308ba8c3d5320969212af4bcf1be9882e91c5f3d06f07e0ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518b648e604cc7f308ba8c3d5320969212af4bcf1be9882e91c5f3d06f07e0ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518b648e604cc7f308ba8c3d5320969212af4bcf1be9882e91c5f3d06f07e0ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518b648e604cc7f308ba8c3d5320969212af4bcf1be9882e91c5f3d06f07e0ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:00 compute-0 podman[92614]: 2026-01-29 09:14:00.70962681 +0000 UTC m=+0.189814468 container init d82df8da81a1c3b834bdf895493a328fb942985394b7b5c741bc1e96751700ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 09:14:00 compute-0 podman[92614]: 2026-01-29 09:14:00.716308645 +0000 UTC m=+0.196496253 container start d82df8da81a1c3b834bdf895493a328fb942985394b7b5c741bc1e96751700ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:14:00 compute-0 podman[92614]: 2026-01-29 09:14:00.721994354 +0000 UTC m=+0.202182052 container attach d82df8da81a1c3b834bdf895493a328fb942985394b7b5c741bc1e96751700ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:01 compute-0 gracious_meninsky[92636]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:14:01 compute-0 gracious_meninsky[92636]: --> All data devices are unavailable
Jan 29 09:14:01 compute-0 systemd[1]: libpod-d82df8da81a1c3b834bdf895493a328fb942985394b7b5c741bc1e96751700ee.scope: Deactivated successfully.
Jan 29 09:14:01 compute-0 conmon[92636]: conmon d82df8da81a1c3b834bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d82df8da81a1c3b834bdf895493a328fb942985394b7b5c741bc1e96751700ee.scope/container/memory.events
Jan 29 09:14:01 compute-0 podman[92614]: 2026-01-29 09:14:01.22277747 +0000 UTC m=+0.702965078 container died d82df8da81a1c3b834bdf895493a328fb942985394b7b5c741bc1e96751700ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:14:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 29 09:14:01 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3940389600' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 29 09:14:01 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3940389600' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 29 09:14:01 compute-0 systemd[1]: libpod-76effb8c8092158f0934841a4633bd8a477fae26425b8a78310445d4b8de9e51.scope: Deactivated successfully.
Jan 29 09:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-518b648e604cc7f308ba8c3d5320969212af4bcf1be9882e91c5f3d06f07e0ca-merged.mount: Deactivated successfully.
Jan 29 09:14:01 compute-0 podman[92593]: 2026-01-29 09:14:01.302225979 +0000 UTC m=+1.030665584 container died 76effb8c8092158f0934841a4633bd8a477fae26425b8a78310445d4b8de9e51 (image=quay.io/ceph/ceph:v20, name=jovial_rubin, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:14:01 compute-0 podman[92614]: 2026-01-29 09:14:01.331709481 +0000 UTC m=+0.811897089 container remove d82df8da81a1c3b834bdf895493a328fb942985394b7b5c741bc1e96751700ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 09:14:01 compute-0 systemd[1]: libpod-conmon-d82df8da81a1c3b834bdf895493a328fb942985394b7b5c741bc1e96751700ee.scope: Deactivated successfully.
Jan 29 09:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-64c35b3b653d153072af2b161ee01dccf91182aedcd4520e1ee33fd2f24ea663-merged.mount: Deactivated successfully.
Jan 29 09:14:01 compute-0 podman[92593]: 2026-01-29 09:14:01.372422166 +0000 UTC m=+1.100861771 container remove 76effb8c8092158f0934841a4633bd8a477fae26425b8a78310445d4b8de9e51 (image=quay.io/ceph/ceph:v20, name=jovial_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:14:01 compute-0 sudo[92423]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:01 compute-0 systemd[1]: libpod-conmon-76effb8c8092158f0934841a4633bd8a477fae26425b8a78310445d4b8de9e51.scope: Deactivated successfully.
Jan 29 09:14:01 compute-0 sudo[92578]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:01 compute-0 sudo[92703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:14:01 compute-0 sudo[92703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:01 compute-0 sudo[92703]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 29 09:14:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 29 09:14:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 09:14:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 09:14:01 compute-0 ceph-mon[75183]: osdmap e41: 3 total, 3 up, 3 in
Jan 29 09:14:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 29 09:14:01 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3940389600' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 29 09:14:01 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3940389600' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 29 09:14:01 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 29 09:14:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 29 09:14:01 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 80833a9b-87d9-4fcb-90f9-7d0cddb84530 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 32f24b3d-9251-4123-8192-226f49c67beb (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 32f24b3d-9251-4123-8192-226f49c67beb (PG autoscaler increasing pool 2 PGs from 1 to 32) in 4 seconds
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev b7dd55ff-461c-4acb-9fdc-82aa461e714c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event b7dd55ff-461c-4acb-9fdc-82aa461e714c (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 05bf685a-3097-44a8-ac9c-5dca1daccbf6 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 05bf685a-3097-44a8-ac9c-5dca1daccbf6 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 24705217-1baa-459f-ab9a-ee06dcc5dec1 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 24705217-1baa-459f-ab9a-ee06dcc5dec1 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 648ed832-c679-4ead-8b7f-3768684162c7 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 648ed832-c679-4ead-8b7f-3768684162c7 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 80833a9b-87d9-4fcb-90f9-7d0cddb84530 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 80833a9b-87d9-4fcb-90f9-7d0cddb84530 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=41 pruub=11.983744621s) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active pruub 86.802711487s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=41 pruub=11.983744621s) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown pruub 86.802711487s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.1e( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.1d( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.1f( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.b( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.6( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.15( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.16( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.17( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.c( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.19( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.3( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=41/42 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:01 compute-0 sudo[92728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:14:01 compute-0 sudo[92728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: [progress INFO root] Writing back 9 completed events
Jan 29 09:14:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 29 09:14:01 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:01 compute-0 podman[92765]: 2026-01-29 09:14:01.757811082 +0000 UTC m=+0.042014180 container create fed4ca772e16ce37e74d32e120f680637d6e911ab2fd0df6471649a88a2c2ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_hopper, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:01 compute-0 systemd[1]: Started libpod-conmon-fed4ca772e16ce37e74d32e120f680637d6e911ab2fd0df6471649a88a2c2ef6.scope.
Jan 29 09:14:01 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:01 compute-0 podman[92765]: 2026-01-29 09:14:01.833875023 +0000 UTC m=+0.118078151 container init fed4ca772e16ce37e74d32e120f680637d6e911ab2fd0df6471649a88a2c2ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_hopper, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:14:01 compute-0 podman[92765]: 2026-01-29 09:14:01.739444282 +0000 UTC m=+0.023647400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:01 compute-0 podman[92765]: 2026-01-29 09:14:01.84025942 +0000 UTC m=+0.124462518 container start fed4ca772e16ce37e74d32e120f680637d6e911ab2fd0df6471649a88a2c2ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_hopper, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 09:14:01 compute-0 podman[92765]: 2026-01-29 09:14:01.843510995 +0000 UTC m=+0.127714133 container attach fed4ca772e16ce37e74d32e120f680637d6e911ab2fd0df6471649a88a2c2ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:14:01 compute-0 goofy_hopper[92781]: 167 167
Jan 29 09:14:01 compute-0 systemd[1]: libpod-fed4ca772e16ce37e74d32e120f680637d6e911ab2fd0df6471649a88a2c2ef6.scope: Deactivated successfully.
Jan 29 09:14:01 compute-0 podman[92765]: 2026-01-29 09:14:01.846249077 +0000 UTC m=+0.130452175 container died fed4ca772e16ce37e74d32e120f680637d6e911ab2fd0df6471649a88a2c2ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0f056b1e6ae5455c7ed7d03094b3886cec937e993a98cbd66e77cbfd71770b8-merged.mount: Deactivated successfully.
Jan 29 09:14:01 compute-0 podman[92765]: 2026-01-29 09:14:01.883418809 +0000 UTC m=+0.167621907 container remove fed4ca772e16ce37e74d32e120f680637d6e911ab2fd0df6471649a88a2c2ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:01 compute-0 systemd[1]: libpod-conmon-fed4ca772e16ce37e74d32e120f680637d6e911ab2fd0df6471649a88a2c2ef6.scope: Deactivated successfully.
Jan 29 09:14:01 compute-0 sudo[92820]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdmhxuvmvktodsycdsxxxaahwwhwvwtm ; /usr/bin/python3'
Jan 29 09:14:01 compute-0 sudo[92820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v96: 131 pgs: 1 peering, 124 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 09:14:01 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 09:14:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 09:14:01 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 09:14:02 compute-0 podman[92830]: 2026-01-29 09:14:02.022930151 +0000 UTC m=+0.047628088 container create f2cc322a88175622f747368ab01d656e606448480dcb3d6496446ba7a350508c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:14:02 compute-0 python3[92824]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:14:02 compute-0 systemd[1]: Started libpod-conmon-f2cc322a88175622f747368ab01d656e606448480dcb3d6496446ba7a350508c.scope.
Jan 29 09:14:02 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4cf3effbaa625bc7c6beb1ebe131ab6bae0f8d3f10fb2dab34b816c25ba7a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4cf3effbaa625bc7c6beb1ebe131ab6bae0f8d3f10fb2dab34b816c25ba7a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4cf3effbaa625bc7c6beb1ebe131ab6bae0f8d3f10fb2dab34b816c25ba7a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4cf3effbaa625bc7c6beb1ebe131ab6bae0f8d3f10fb2dab34b816c25ba7a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:02 compute-0 podman[92830]: 2026-01-29 09:14:02.092539982 +0000 UTC m=+0.117237959 container init f2cc322a88175622f747368ab01d656e606448480dcb3d6496446ba7a350508c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 09:14:02 compute-0 podman[92830]: 2026-01-29 09:14:02.005114574 +0000 UTC m=+0.029812541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:02 compute-0 podman[92846]: 2026-01-29 09:14:02.095802468 +0000 UTC m=+0.043059118 container create 7a4d06132ed49bdae215c6bb1c823c597109152e49a86a2d999347342c0f3124 (image=quay.io/ceph/ceph:v20, name=cool_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 09:14:02 compute-0 podman[92830]: 2026-01-29 09:14:02.100325196 +0000 UTC m=+0.125023133 container start f2cc322a88175622f747368ab01d656e606448480dcb3d6496446ba7a350508c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_golick, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 09:14:02 compute-0 podman[92830]: 2026-01-29 09:14:02.107073843 +0000 UTC m=+0.131771780 container attach f2cc322a88175622f747368ab01d656e606448480dcb3d6496446ba7a350508c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 09:14:02 compute-0 systemd[1]: Started libpod-conmon-7a4d06132ed49bdae215c6bb1c823c597109152e49a86a2d999347342c0f3124.scope.
Jan 29 09:14:02 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad257338ac2c32c191596b472443c84c108a6e9484f71448e98b8187c26b4f4a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad257338ac2c32c191596b472443c84c108a6e9484f71448e98b8187c26b4f4a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:02 compute-0 podman[92846]: 2026-01-29 09:14:02.169461465 +0000 UTC m=+0.116718135 container init 7a4d06132ed49bdae215c6bb1c823c597109152e49a86a2d999347342c0f3124 (image=quay.io/ceph/ceph:v20, name=cool_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:02 compute-0 podman[92846]: 2026-01-29 09:14:02.073399731 +0000 UTC m=+0.020656411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:14:02 compute-0 podman[92846]: 2026-01-29 09:14:02.174684042 +0000 UTC m=+0.121940682 container start 7a4d06132ed49bdae215c6bb1c823c597109152e49a86a2d999347342c0f3124 (image=quay.io/ceph/ceph:v20, name=cool_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 09:14:02 compute-0 podman[92846]: 2026-01-29 09:14:02.178503022 +0000 UTC m=+0.125759692 container attach 7a4d06132ed49bdae215c6bb1c823c597109152e49a86a2d999347342c0f3124 (image=quay.io/ceph/ceph:v20, name=cool_grothendieck, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 09:14:02 compute-0 quizzical_golick[92855]: {
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:     "0": [
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:         {
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "devices": [
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "/dev/loop3"
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             ],
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_name": "ceph_lv0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_size": "21470642176",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "name": "ceph_lv0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "tags": {
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.cluster_name": "ceph",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.crush_device_class": "",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.encrypted": "0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.objectstore": "bluestore",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.osd_id": "0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.type": "block",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.vdo": "0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.with_tpm": "0"
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             },
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "type": "block",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "vg_name": "ceph_vg0"
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:         }
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:     ],
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:     "1": [
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:         {
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "devices": [
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "/dev/loop4"
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             ],
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_name": "ceph_lv1",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_size": "21470642176",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "name": "ceph_lv1",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "tags": {
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.cluster_name": "ceph",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.crush_device_class": "",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.encrypted": "0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.objectstore": "bluestore",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.osd_id": "1",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.type": "block",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.vdo": "0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.with_tpm": "0"
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             },
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "type": "block",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "vg_name": "ceph_vg1"
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:         }
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:     ],
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:     "2": [
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:         {
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "devices": [
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "/dev/loop5"
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             ],
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_name": "ceph_lv2",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_size": "21470642176",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "name": "ceph_lv2",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "tags": {
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.cluster_name": "ceph",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.crush_device_class": "",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.encrypted": "0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.objectstore": "bluestore",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.osd_id": "2",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.type": "block",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.vdo": "0",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:                 "ceph.with_tpm": "0"
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             },
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "type": "block",
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:             "vg_name": "ceph_vg2"
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:         }
Jan 29 09:14:02 compute-0 quizzical_golick[92855]:     ]
Jan 29 09:14:02 compute-0 quizzical_golick[92855]: }
Jan 29 09:14:02 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 29 09:14:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 29 09:14:02 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 29 09:14:02 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 29 09:14:02 compute-0 ceph-mon[75183]: osdmap e42: 3 total, 3 up, 3 in
Jan 29 09:14:02 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:02 compute-0 ceph-mon[75183]: pgmap v96: 131 pgs: 1 peering, 124 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:02 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 09:14:02 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 09:14:02 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 09:14:02 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 09:14:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 29 09:14:02 compute-0 systemd[1]: libpod-f2cc322a88175622f747368ab01d656e606448480dcb3d6496446ba7a350508c.scope: Deactivated successfully.
Jan 29 09:14:02 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 29 09:14:02 compute-0 conmon[92855]: conmon f2cc322a88175622f747 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2cc322a88175622f747368ab01d656e606448480dcb3d6496446ba7a350508c.scope/container/memory.events
Jan 29 09:14:02 compute-0 podman[92830]: 2026-01-29 09:14:02.476865921 +0000 UTC m=+0.501563878 container died f2cc322a88175622f747368ab01d656e606448480dcb3d6496446ba7a350508c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_golick, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=43 pruub=15.388912201s) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active pruub 91.221656799s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=43 pruub=15.388912201s) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown pruub 91.221656799s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.17( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.16( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.15( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=41/43 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe4cf3effbaa625bc7c6beb1ebe131ab6bae0f8d3f10fb2dab34b816c25ba7a6-merged.mount: Deactivated successfully.
Jan 29 09:14:02 compute-0 podman[92830]: 2026-01-29 09:14:02.542110048 +0000 UTC m=+0.566807995 container remove f2cc322a88175622f747368ab01d656e606448480dcb3d6496446ba7a350508c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:14:02 compute-0 systemd[1]: libpod-conmon-f2cc322a88175622f747368ab01d656e606448480dcb3d6496446ba7a350508c.scope: Deactivated successfully.
Jan 29 09:14:02 compute-0 sudo[92728]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:02 compute-0 sudo[92910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:14:02 compute-0 sudo[92910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:02 compute-0 sudo[92910]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:02 compute-0 sudo[92935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:14:02 compute-0 sudo[92935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:02 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=43 pruub=9.044003487s) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active pruub 73.670753479s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:02 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=43 pruub=9.044003487s) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown pruub 73.670753479s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 29 09:14:02 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2141248754' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 09:14:02 compute-0 cool_grothendieck[92869]: 
Jan 29 09:14:02 compute-0 cool_grothendieck[92869]: {"fsid":"3fdce3ca-565d-5459-88e8-1ffe58b48437","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":144,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":43,"num_osds":3,"num_up_osds":3,"osd_up_since":1769677998,"num_in_osds":3,"osd_in_since":1769677957,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":124},{"state_name":"active+clean","count":6},{"state_name":"peering","count":1}],"num_pgs":131,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84103168,"bytes_avail":64327823360,"bytes_total":64411926528,"unknown_pgs_ratio":0.94656491279602051,"inactive_pgs_ratio":0.0076335878111422062},"fsmap":{"epoch":2,"btime":"2026-01-29T09:13:57:309478+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-29T09:12:57.948210+0000","services":{}},"progress_events":{}}
Jan 29 09:14:02 compute-0 systemd[1]: libpod-7a4d06132ed49bdae215c6bb1c823c597109152e49a86a2d999347342c0f3124.scope: Deactivated successfully.
Jan 29 09:14:02 compute-0 podman[92846]: 2026-01-29 09:14:02.805562893 +0000 UTC m=+0.752819573 container died 7a4d06132ed49bdae215c6bb1c823c597109152e49a86a2d999347342c0f3124 (image=quay.io/ceph/ceph:v20, name=cool_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 09:14:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad257338ac2c32c191596b472443c84c108a6e9484f71448e98b8187c26b4f4a-merged.mount: Deactivated successfully.
Jan 29 09:14:02 compute-0 podman[92846]: 2026-01-29 09:14:02.852759108 +0000 UTC m=+0.800015758 container remove 7a4d06132ed49bdae215c6bb1c823c597109152e49a86a2d999347342c0f3124 (image=quay.io/ceph/ceph:v20, name=cool_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:02 compute-0 systemd[1]: libpod-conmon-7a4d06132ed49bdae215c6bb1c823c597109152e49a86a2d999347342c0f3124.scope: Deactivated successfully.
Jan 29 09:14:02 compute-0 sudo[92820]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:02 compute-0 sudo[93022]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cadbjrhvcfkxujptgtcsspkktxxgevrt ; /usr/bin/python3'
Jan 29 09:14:03 compute-0 sudo[93022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:03 compute-0 podman[92992]: 2026-01-29 09:14:03.010888346 +0000 UTC m=+0.041506847 container create 4189c5441bddda0735c78c8dc38fec134abe95ae891b6b4234bc65ca9c59ef85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 09:14:03 compute-0 systemd[1]: Started libpod-conmon-4189c5441bddda0735c78c8dc38fec134abe95ae891b6b4234bc65ca9c59ef85.scope.
Jan 29 09:14:03 compute-0 podman[92992]: 2026-01-29 09:14:02.990076172 +0000 UTC m=+0.020694703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:03 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:03 compute-0 podman[92992]: 2026-01-29 09:14:03.108011058 +0000 UTC m=+0.138629579 container init 4189c5441bddda0735c78c8dc38fec134abe95ae891b6b4234bc65ca9c59ef85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wiles, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:14:03 compute-0 podman[92992]: 2026-01-29 09:14:03.115308819 +0000 UTC m=+0.145927320 container start 4189c5441bddda0735c78c8dc38fec134abe95ae891b6b4234bc65ca9c59ef85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:03 compute-0 podman[92992]: 2026-01-29 09:14:03.119449418 +0000 UTC m=+0.150067919 container attach 4189c5441bddda0735c78c8dc38fec134abe95ae891b6b4234bc65ca9c59ef85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:14:03 compute-0 fervent_wiles[93029]: 167 167
Jan 29 09:14:03 compute-0 systemd[1]: libpod-4189c5441bddda0735c78c8dc38fec134abe95ae891b6b4234bc65ca9c59ef85.scope: Deactivated successfully.
Jan 29 09:14:03 compute-0 podman[92992]: 2026-01-29 09:14:03.121537762 +0000 UTC m=+0.152156283 container died 4189c5441bddda0735c78c8dc38fec134abe95ae891b6b4234bc65ca9c59ef85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wiles, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 09:14:03 compute-0 python3[93026]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:14:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a13847183e3be5726c79a0a621e6d9658f0f5c12b4806d5ebd5519817db7ba28-merged.mount: Deactivated successfully.
Jan 29 09:14:03 compute-0 podman[92992]: 2026-01-29 09:14:03.21965966 +0000 UTC m=+0.250278161 container remove 4189c5441bddda0735c78c8dc38fec134abe95ae891b6b4234bc65ca9c59ef85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wiles, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 09:14:03 compute-0 systemd[1]: libpod-conmon-4189c5441bddda0735c78c8dc38fec134abe95ae891b6b4234bc65ca9c59ef85.scope: Deactivated successfully.
Jan 29 09:14:03 compute-0 podman[93044]: 2026-01-29 09:14:03.247966211 +0000 UTC m=+0.064354675 container create cf7b580fd6945f383563bffecd449faf16fd1e842a590d6f528c4dcdd9a2ca68 (image=quay.io/ceph/ceph:v20, name=loving_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 09:14:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:14:03 compute-0 systemd[1]: Started libpod-conmon-cf7b580fd6945f383563bffecd449faf16fd1e842a590d6f528c4dcdd9a2ca68.scope.
Jan 29 09:14:03 compute-0 podman[93044]: 2026-01-29 09:14:03.229092867 +0000 UTC m=+0.045481351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:14:03 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40f8adbf85eef21d6e2bac5b86f03080f14fe030271eeb4b6a144e177446d29c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40f8adbf85eef21d6e2bac5b86f03080f14fe030271eeb4b6a144e177446d29c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:03 compute-0 podman[93066]: 2026-01-29 09:14:03.354830388 +0000 UTC m=+0.043371466 container create c95dadc81beb804a0bc7b9d937388cf349f455b6749c80161bb26dfa5d890cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:14:03 compute-0 podman[93044]: 2026-01-29 09:14:03.364146811 +0000 UTC m=+0.180535305 container init cf7b580fd6945f383563bffecd449faf16fd1e842a590d6f528c4dcdd9a2ca68 (image=quay.io/ceph/ceph:v20, name=loving_montalcini, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:03 compute-0 podman[93044]: 2026-01-29 09:14:03.369530712 +0000 UTC m=+0.185919176 container start cf7b580fd6945f383563bffecd449faf16fd1e842a590d6f528c4dcdd9a2ca68 (image=quay.io/ceph/ceph:v20, name=loving_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:14:03 compute-0 podman[93044]: 2026-01-29 09:14:03.387259616 +0000 UTC m=+0.203648090 container attach cf7b580fd6945f383563bffecd449faf16fd1e842a590d6f528c4dcdd9a2ca68 (image=quay.io/ceph/ceph:v20, name=loving_montalcini, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:14:03 compute-0 systemd[1]: Started libpod-conmon-c95dadc81beb804a0bc7b9d937388cf349f455b6749c80161bb26dfa5d890cdb.scope.
Jan 29 09:14:03 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1e547bc18b16d970ba94380cdb983802ed2f3ab39ca440423835b70e31e624/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1e547bc18b16d970ba94380cdb983802ed2f3ab39ca440423835b70e31e624/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1e547bc18b16d970ba94380cdb983802ed2f3ab39ca440423835b70e31e624/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1e547bc18b16d970ba94380cdb983802ed2f3ab39ca440423835b70e31e624/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:03 compute-0 podman[93066]: 2026-01-29 09:14:03.334037643 +0000 UTC m=+0.022578751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:03 compute-0 podman[93066]: 2026-01-29 09:14:03.445164852 +0000 UTC m=+0.133705950 container init c95dadc81beb804a0bc7b9d937388cf349f455b6749c80161bb26dfa5d890cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 09:14:03 compute-0 podman[93066]: 2026-01-29 09:14:03.451166679 +0000 UTC m=+0.139707757 container start c95dadc81beb804a0bc7b9d937388cf349f455b6749c80161bb26dfa5d890cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_poincare, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 09:14:03 compute-0 podman[93066]: 2026-01-29 09:14:03.457001452 +0000 UTC m=+0.145542550 container attach c95dadc81beb804a0bc7b9d937388cf349f455b6749c80161bb26dfa5d890cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_poincare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:14:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 29 09:14:03 compute-0 ceph-mon[75183]: 3.1f scrub starts
Jan 29 09:14:03 compute-0 ceph-mon[75183]: 3.1f scrub ok
Jan 29 09:14:03 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 09:14:03 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 09:14:03 compute-0 ceph-mon[75183]: osdmap e43: 3 total, 3 up, 3 in
Jan 29 09:14:03 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2141248754' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 09:14:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.0( empty local-lis/les=43/44 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 09:14:03 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1006465769' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 09:14:03 compute-0 loving_montalcini[93078]: 
Jan 29 09:14:03 compute-0 loving_montalcini[93078]: {"epoch":1,"fsid":"3fdce3ca-565d-5459-88e8-1ffe58b48437","modified":"2026-01-29T09:11:34.210489Z","created":"2026-01-29T09:11:34.210489Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 29 09:14:03 compute-0 loving_montalcini[93078]: dumped monmap epoch 1
Jan 29 09:14:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 1 peering, 124 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:03 compute-0 podman[93044]: 2026-01-29 09:14:03.970221512 +0000 UTC m=+0.786610006 container died cf7b580fd6945f383563bffecd449faf16fd1e842a590d6f528c4dcdd9a2ca68 (image=quay.io/ceph/ceph:v20, name=loving_montalcini, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:14:03 compute-0 systemd[1]: libpod-cf7b580fd6945f383563bffecd449faf16fd1e842a590d6f528c4dcdd9a2ca68.scope: Deactivated successfully.
Jan 29 09:14:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-40f8adbf85eef21d6e2bac5b86f03080f14fe030271eeb4b6a144e177446d29c-merged.mount: Deactivated successfully.
Jan 29 09:14:04 compute-0 podman[93044]: 2026-01-29 09:14:04.01676408 +0000 UTC m=+0.833152544 container remove cf7b580fd6945f383563bffecd449faf16fd1e842a590d6f528c4dcdd9a2ca68 (image=quay.io/ceph/ceph:v20, name=loving_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:14:04 compute-0 systemd[1]: libpod-conmon-cf7b580fd6945f383563bffecd449faf16fd1e842a590d6f528c4dcdd9a2ca68.scope: Deactivated successfully.
Jan 29 09:14:04 compute-0 sudo[93022]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:04 compute-0 lvm[93197]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:14:04 compute-0 lvm[93197]: VG ceph_vg0 finished
Jan 29 09:14:04 compute-0 lvm[93196]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:14:04 compute-0 lvm[93196]: VG ceph_vg1 finished
Jan 29 09:14:04 compute-0 lvm[93199]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:14:04 compute-0 lvm[93199]: VG ceph_vg2 finished
Jan 29 09:14:04 compute-0 interesting_poincare[93087]: {}
Jan 29 09:14:04 compute-0 systemd[1]: libpod-c95dadc81beb804a0bc7b9d937388cf349f455b6749c80161bb26dfa5d890cdb.scope: Deactivated successfully.
Jan 29 09:14:04 compute-0 systemd[1]: libpod-c95dadc81beb804a0bc7b9d937388cf349f455b6749c80161bb26dfa5d890cdb.scope: Consumed 1.328s CPU time.
Jan 29 09:14:04 compute-0 podman[93066]: 2026-01-29 09:14:04.374328428 +0000 UTC m=+1.062869516 container died c95dadc81beb804a0bc7b9d937388cf349f455b6749c80161bb26dfa5d890cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_poincare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-db1e547bc18b16d970ba94380cdb983802ed2f3ab39ca440423835b70e31e624-merged.mount: Deactivated successfully.
Jan 29 09:14:04 compute-0 sudo[93230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldtnodmcaxekdbjwplobzxoxfvhhvupv ; /usr/bin/python3'
Jan 29 09:14:04 compute-0 sudo[93230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:04 compute-0 podman[93066]: 2026-01-29 09:14:04.425259751 +0000 UTC m=+1.113800829 container remove c95dadc81beb804a0bc7b9d937388cf349f455b6749c80161bb26dfa5d890cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_poincare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:04 compute-0 systemd[1]: libpod-conmon-c95dadc81beb804a0bc7b9d937388cf349f455b6749c80161bb26dfa5d890cdb.scope: Deactivated successfully.
Jan 29 09:14:04 compute-0 sudo[92935]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:14:04 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:14:04 compute-0 ceph-mon[75183]: osdmap e44: 3 total, 3 up, 3 in
Jan 29 09:14:04 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1006465769' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 09:14:04 compute-0 ceph-mon[75183]: pgmap v99: 193 pgs: 1 peering, 124 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:04 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:04 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 9254cf78-13e5-48d2-8f80-fd06b11d9f80 (Updating mds.cephfs deployment (+1 -> 1))
Jan 29 09:14:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.eawrqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 29 09:14:04 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.eawrqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 29 09:14:04 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.eawrqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 29 09:14:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:14:04 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:14:04 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.eawrqy on compute-0
Jan 29 09:14:04 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.eawrqy on compute-0
Jan 29 09:14:04 compute-0 python3[93240]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:14:04 compute-0 sudo[93241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:14:04 compute-0 sudo[93241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:04 compute-0 sudo[93241]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:04 compute-0 podman[93247]: 2026-01-29 09:14:04.631539859 +0000 UTC m=+0.045129472 container create 00bb77a30eccaacf979acc25db2420befe78ff62df592dcacc0bbad02e4a8ff1 (image=quay.io/ceph/ceph:v20, name=vigorous_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:14:04 compute-0 sudo[93278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
Jan 29 09:14:04 compute-0 sudo[93278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:04 compute-0 systemd[1]: Started libpod-conmon-00bb77a30eccaacf979acc25db2420befe78ff62df592dcacc0bbad02e4a8ff1.scope.
Jan 29 09:14:04 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2515220349765f11a1cbdc586c372860b6249396cc1a45035069b05522e662/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2515220349765f11a1cbdc586c372860b6249396cc1a45035069b05522e662/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:04 compute-0 podman[93247]: 2026-01-29 09:14:04.614423261 +0000 UTC m=+0.028012894 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:14:04 compute-0 podman[93247]: 2026-01-29 09:14:04.712696173 +0000 UTC m=+0.126285786 container init 00bb77a30eccaacf979acc25db2420befe78ff62df592dcacc0bbad02e4a8ff1 (image=quay.io/ceph/ceph:v20, name=vigorous_poincare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 29 09:14:04 compute-0 podman[93247]: 2026-01-29 09:14:04.719163722 +0000 UTC m=+0.132753335 container start 00bb77a30eccaacf979acc25db2420befe78ff62df592dcacc0bbad02e4a8ff1 (image=quay.io/ceph/ceph:v20, name=vigorous_poincare, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:04 compute-0 podman[93247]: 2026-01-29 09:14:04.722654124 +0000 UTC m=+0.136243757 container attach 00bb77a30eccaacf979acc25db2420befe78ff62df592dcacc0bbad02e4a8ff1 (image=quay.io/ceph/ceph:v20, name=vigorous_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 09:14:05 compute-0 podman[93371]: 2026-01-29 09:14:05.063073093 +0000 UTC m=+0.033338144 container create e921acbc244b567c3bd39f68e5ed422a3f9ce83e9defe8daaa0870acbc3e40a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 09:14:05 compute-0 systemd[1]: Started libpod-conmon-e921acbc244b567c3bd39f68e5ed422a3f9ce83e9defe8daaa0870acbc3e40a4.scope.
Jan 29 09:14:05 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:05 compute-0 podman[93371]: 2026-01-29 09:14:05.136885204 +0000 UTC m=+0.107150265 container init e921acbc244b567c3bd39f68e5ed422a3f9ce83e9defe8daaa0870acbc3e40a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_liskov, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 09:14:05 compute-0 podman[93371]: 2026-01-29 09:14:05.141426923 +0000 UTC m=+0.111691974 container start e921acbc244b567c3bd39f68e5ed422a3f9ce83e9defe8daaa0870acbc3e40a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 09:14:05 compute-0 podman[93371]: 2026-01-29 09:14:05.145797108 +0000 UTC m=+0.116062189 container attach e921acbc244b567c3bd39f68e5ed422a3f9ce83e9defe8daaa0870acbc3e40a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_liskov, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 09:14:05 compute-0 podman[93371]: 2026-01-29 09:14:05.050171465 +0000 UTC m=+0.020436536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:05 compute-0 heuristic_liskov[93386]: 167 167
Jan 29 09:14:05 compute-0 systemd[1]: libpod-e921acbc244b567c3bd39f68e5ed422a3f9ce83e9defe8daaa0870acbc3e40a4.scope: Deactivated successfully.
Jan 29 09:14:05 compute-0 podman[93371]: 2026-01-29 09:14:05.148337424 +0000 UTC m=+0.118602475 container died e921acbc244b567c3bd39f68e5ed422a3f9ce83e9defe8daaa0870acbc3e40a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_liskov, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 09:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8ee82417ea679ea0588cbbb8daa359dabca5cfdc289f5459241f9cce6c945f3-merged.mount: Deactivated successfully.
Jan 29 09:14:05 compute-0 systemd[76570]: Starting Mark boot as successful...
Jan 29 09:14:05 compute-0 podman[93371]: 2026-01-29 09:14:05.195201371 +0000 UTC m=+0.165466422 container remove e921acbc244b567c3bd39f68e5ed422a3f9ce83e9defe8daaa0870acbc3e40a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_liskov, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 09:14:05 compute-0 systemd[76570]: Finished Mark boot as successful.
Jan 29 09:14:05 compute-0 systemd[1]: libpod-conmon-e921acbc244b567c3bd39f68e5ed422a3f9ce83e9defe8daaa0870acbc3e40a4.scope: Deactivated successfully.
Jan 29 09:14:05 compute-0 systemd[1]: Reloading.
Jan 29 09:14:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 29 09:14:05 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/951207349' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 29 09:14:05 compute-0 vigorous_poincare[93307]: [client.openstack]
Jan 29 09:14:05 compute-0 vigorous_poincare[93307]:         key = AQAdJHtpAAAAABAAaI37n/Z6PSZlO/27IIsTqw==
Jan 29 09:14:05 compute-0 vigorous_poincare[93307]:         caps mgr = "allow *"
Jan 29 09:14:05 compute-0 vigorous_poincare[93307]:         caps mon = "profile rbd"
Jan 29 09:14:05 compute-0 vigorous_poincare[93307]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 29 09:14:05 compute-0 systemd-rc-local-generator[93430]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:14:05 compute-0 podman[93247]: 2026-01-29 09:14:05.315665093 +0000 UTC m=+0.729254706 container died 00bb77a30eccaacf979acc25db2420befe78ff62df592dcacc0bbad02e4a8ff1 (image=quay.io/ceph/ceph:v20, name=vigorous_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:14:05 compute-0 systemd-sysv-generator[93435]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:14:05 compute-0 systemd[1]: libpod-00bb77a30eccaacf979acc25db2420befe78ff62df592dcacc0bbad02e4a8ff1.scope: Deactivated successfully.
Jan 29 09:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f2515220349765f11a1cbdc586c372860b6249396cc1a45035069b05522e662-merged.mount: Deactivated successfully.
Jan 29 09:14:05 compute-0 podman[93247]: 2026-01-29 09:14:05.499556506 +0000 UTC m=+0.913146119 container remove 00bb77a30eccaacf979acc25db2420befe78ff62df592dcacc0bbad02e4a8ff1 (image=quay.io/ceph/ceph:v20, name=vigorous_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 09:14:05 compute-0 systemd[1]: libpod-conmon-00bb77a30eccaacf979acc25db2420befe78ff62df592dcacc0bbad02e4a8ff1.scope: Deactivated successfully.
Jan 29 09:14:05 compute-0 sudo[93230]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.eawrqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 29 09:14:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.eawrqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 29 09:14:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:14:05 compute-0 ceph-mon[75183]: Deploying daemon mds.cephfs.compute-0.eawrqy on compute-0
Jan 29 09:14:05 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/951207349' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 29 09:14:05 compute-0 systemd[1]: Reloading.
Jan 29 09:14:05 compute-0 systemd-rc-local-generator[93488]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:14:05 compute-0 systemd-sysv-generator[93491]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:14:05 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.eawrqy for 3fdce3ca-565d-5459-88e8-1ffe58b48437...
Jan 29 09:14:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 09:14:05 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 09:14:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 09:14:05 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 09:14:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 09:14:05 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 09:14:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 09:14:05 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 09:14:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 09:14:05 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 09:14:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 09:14:05 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 09:14:06 compute-0 podman[93546]: 2026-01-29 09:14:06.013016294 +0000 UTC m=+0.045645496 container create 47537243927a6a192a51fa06ec780d1443cb712f745ba0afcbbe80d533a2bcb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mds-cephfs-compute-0-eawrqy, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 09:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a316e567816ce64a5abb79f7fa27ad5d8d315f5ab155e660eae10b26c94d32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a316e567816ce64a5abb79f7fa27ad5d8d315f5ab155e660eae10b26c94d32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a316e567816ce64a5abb79f7fa27ad5d8d315f5ab155e660eae10b26c94d32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a316e567816ce64a5abb79f7fa27ad5d8d315f5ab155e660eae10b26c94d32/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.eawrqy supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:06 compute-0 podman[93546]: 2026-01-29 09:14:06.081078245 +0000 UTC m=+0.113707467 container init 47537243927a6a192a51fa06ec780d1443cb712f745ba0afcbbe80d533a2bcb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mds-cephfs-compute-0-eawrqy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Jan 29 09:14:06 compute-0 podman[93546]: 2026-01-29 09:14:06.086567468 +0000 UTC m=+0.119196670 container start 47537243927a6a192a51fa06ec780d1443cb712f745ba0afcbbe80d533a2bcb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mds-cephfs-compute-0-eawrqy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:14:06 compute-0 podman[93546]: 2026-01-29 09:14:05.991392418 +0000 UTC m=+0.024021650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:06 compute-0 bash[93546]: 47537243927a6a192a51fa06ec780d1443cb712f745ba0afcbbe80d533a2bcb2
Jan 29 09:14:06 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.eawrqy for 3fdce3ca-565d-5459-88e8-1ffe58b48437.
Jan 29 09:14:06 compute-0 ceph-mds[93566]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 09:14:06 compute-0 ceph-mds[93566]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 29 09:14:06 compute-0 ceph-mds[93566]: main not setting numa affinity
Jan 29 09:14:06 compute-0 ceph-mds[93566]: pidfile_write: ignore empty --pid-file
Jan 29 09:14:06 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mds-cephfs-compute-0-eawrqy[93562]: starting mds.cephfs.compute-0.eawrqy at 
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy Updating MDS map to version 2 from mon.0
Jan 29 09:14:06 compute-0 sudo[93278]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:06 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 9254cf78-13e5-48d2-8f80-fd06b11d9f80 (Updating mds.cephfs deployment (+1 -> 1))
Jan 29 09:14:06 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 9254cf78-13e5-48d2-8f80-fd06b11d9f80 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:06 compute-0 sudo[93585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:14:06 compute-0 sudo[93585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:06 compute-0 sudo[93585]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:06 compute-0 sudo[93610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:14:06 compute-0 sudo[93610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:06 compute-0 sudo[93610]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:06 compute-0 sudo[93635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 29 09:14:06 compute-0 sudo[93635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.871208191s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.814163208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.871096611s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814163208s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925963402s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.869056702s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925905228s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869056702s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926049232s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.869293213s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926024437s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869293213s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870428085s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813789368s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870413780s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813789368s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870702744s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.814125061s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870686531s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814125061s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870272636s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813743591s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870223999s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813743591s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870153427s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813697815s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870141029s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813697815s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925490379s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.869110107s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925471306s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869110107s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870522499s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.814178467s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870504379s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814178467s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925465584s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.869155884s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925437927s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869155884s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926127434s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.869911194s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869931221s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813735962s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926114082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869911194s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869914055s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813735962s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926186562s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.870048523s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926169395s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870048523s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926018715s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.869964600s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925992012s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869964600s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869600296s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813606262s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926094055s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.870071411s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926009178s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870071411s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869582176s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813606262s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869405746s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813545227s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869391441s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813545227s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).mds e3 new map
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925883293s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.870101929s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-01-29T09:14:06:531815+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-29T09:13:57.309237+0000
                                           modified        2026-01-29T09:13:57.309237+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.eawrqy{-1:14244} state up:standby seq 1 addr [v2:192.168.122.100:6814/925034906,v1:192.168.122.100:6815/925034906] compat {c=[1],r=[1],i=[1fff]}]
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925864220s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870101929s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869065285s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813446045s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869050980s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813446045s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-mon[75183]: pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 09:14:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 09:14:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 09:14:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 09:14:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 09:14:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 09:14:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:06 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925468445s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.870857239s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925449371s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870857239s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867900848s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813331604s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867880821s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813331604s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867644310s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813186646s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867627144s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813186646s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859783173s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.805412292s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859755516s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805412292s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925496101s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871192932s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925479889s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871192932s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925609589s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871208191s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859578133s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.805381775s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859561920s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805381775s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925427437s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871307373s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925411224s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871307373s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867284775s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813247681s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925439835s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871208191s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867267609s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813247681s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925297737s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871330261s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925284386s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871330261s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859183311s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.805305481s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925175667s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871322632s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859185219s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.805335999s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859161377s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805305481s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925143242s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871322632s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859100342s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805335999s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925184250s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871467590s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858718872s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.805030823s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925166130s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871467590s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858699799s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805030823s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858575821s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.804977417s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858903885s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.805351257s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858885765s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805351257s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925213814s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871688843s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925176620s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858347893s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.804908752s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858330727s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804908752s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858257294s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.804893494s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925024033s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871688843s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858238220s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804893494s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858558655s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804977417s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925005913s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924736977s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871589661s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924706459s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871589661s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.853088379s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.800247192s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.853067398s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.800247192s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924295425s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871582031s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924279213s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871582031s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy Updating MDS map to version 3 from mon.0
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy Monitors have assigned me to become a standby
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003906250s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.418182373s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003882408s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418182373s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841853142s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256263733s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841838837s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256263733s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003689766s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.418220520s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003646851s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418220520s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841897964s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256576538s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841745377s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256446838s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003516197s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.418243408s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859436035s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.274192810s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859423637s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.274192810s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841225624s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256187439s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841215134s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256187439s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841701508s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256446838s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841834068s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256576538s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841054916s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256141663s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841039658s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256141663s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005534172s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.420669556s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005519867s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.420669556s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003029823s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418243408s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840822220s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256080627s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840955734s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256217957s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840809822s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256080627s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840927124s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256217957s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005935669s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421333313s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840617180s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256027222s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005921364s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421333313s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840600014s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256027222s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005681992s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421249390s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005669594s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421249390s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005396843s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421043396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005373001s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421043396s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005716324s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421417236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005702019s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421417236s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005743980s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421562195s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005726814s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421562195s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/925034906,v1:192.168.122.100:6815/925034906] up:boot
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/925034906,v1:192.168.122.100:6815/925034906] as mds.0
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.eawrqy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839838028s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255844116s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005626678s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421653748s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839819908s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005608559s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421653748s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839775085s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255844116s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839753151s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005732536s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421928406s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839701653s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255912781s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005712509s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421928406s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839684486s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255912781s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839718819s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255989075s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839698792s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255989075s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839388847s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255767822s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004714966s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421127319s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839371681s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255767822s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421127319s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005585670s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422073364s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005562782s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422073364s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839193344s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255752563s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005764961s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422416687s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839136124s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255752563s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005743980s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422416687s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005180359s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422157288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005159378s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422157288s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838771820s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255821228s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838751793s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838637352s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255821228s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005061150s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422256470s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838616371s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005044937s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422256470s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004891396s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422325134s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004875183s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422325134s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837707520s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255317688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837686539s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004628181s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422424316s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837920189s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255729675s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837900162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255729675s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004602432s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422424316s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004592896s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422576904s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837324142s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255317688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004566193s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422576904s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004714966s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422737122s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837303162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422737122s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837179184s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255332947s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837144852s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255332947s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.917540550s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.838790894s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.917479515s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838790894s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963941574s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.885498047s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963922501s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885498047s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963768005s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.885505676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963748932s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885505676s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916956902s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.838867188s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964960098s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.886856079s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916940689s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838867188s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964897156s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886856079s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916765213s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.838905334s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964747429s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.886917114s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916724205s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838905334s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964704514s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886917114s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916565895s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.838989258s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916548729s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916506767s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.838989258s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916488647s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916416168s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839012146s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916399002s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839012146s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964230537s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.886940002s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964209557s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886940002s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916178703s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839042664s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964348793s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.887237549s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916154861s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839042664s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964330673s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887237549s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963886261s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.886901855s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916011810s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839080811s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963843346s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886901855s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915985107s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839080811s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964128494s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.887260437s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964110374s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887260437s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964040756s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.887283325s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964025497s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887283325s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963943481s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.887313843s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963914871s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887313843s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915515900s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.838935852s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915501595s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838935852s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915622711s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839096069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915602684s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839096069s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915547371s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839103699s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963695526s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.887374878s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963669777s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887374878s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990222931s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.913986206s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915362358s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839141846s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990189552s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.913986206s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915343285s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839141846s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915319443s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839187622s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915301323s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839187622s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990167618s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.914100647s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990150452s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914100647s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915018082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839103699s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915086746s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839225769s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915067673s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.914976120s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839225769s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.914950371s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921164513s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.845474243s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921147346s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845474243s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990023613s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.914367676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990005493s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914367676s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921440125s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.845909119s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921417236s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845909119s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990056038s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.914581299s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990031242s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914581299s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.920965195s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.845588684s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.920948982s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845588684s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921165466s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.845848083s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921145439s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845848083s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989798546s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.914543152s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989780426s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914543152s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921191216s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.845970154s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.992457390s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.917366028s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989642143s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.914573669s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921034813s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845970154s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989624977s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914573669s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.992428780s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.917366028s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989519119s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.914604187s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989503860s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914604187s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.eawrqy"} v 0)
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.eawrqy"} : dispatch
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).mds e3 all = 0
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).mds e4 new map
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-01-29T09:14:06:563859+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-29T09:13:57.309237+0000
                                           modified        2026-01-29T09:14:06.563848+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14244}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.eawrqy{0:14244} state up:creating seq 1 addr [v2:192.168.122.100:6814/925034906,v1:192.168.122.100:6815/925034906] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy Updating MDS map to version 4 from mon.0
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.cache creating system inode with ino:0x1
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.cache creating system inode with ino:0x100
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.cache creating system inode with ino:0x600
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.cache creating system inode with ino:0x601
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.cache creating system inode with ino:0x602
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.cache creating system inode with ino:0x603
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.cache creating system inode with ino:0x604
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.cache creating system inode with ino:0x605
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.cache creating system inode with ino:0x606
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.cache creating system inode with ino:0x607
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.cache creating system inode with ino:0x608
Jan 29 09:14:06 compute-0 ceph-mds[93566]: mds.0.cache creating system inode with ino:0x609
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.eawrqy=up:creating}
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:14:06 compute-0 ceph-mgr[75473]: [progress INFO root] Writing back 10 completed events
Jan 29 09:14:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 29 09:14:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:06 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 29 09:14:06 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 29 09:14:06 compute-0 podman[93719]: 2026-01-29 09:14:06.856323254 +0000 UTC m=+0.081765121 container exec 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:14:06 compute-0 podman[93719]: 2026-01-29 09:14:06.976278813 +0000 UTC m=+0.201720670 container exec_died 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 09:14:07 compute-0 sudo[93920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyuiehvgnazminykbqiglxwtmtsweblw ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769678046.7837863-36914-47761377929548/async_wrapper.py j99391413346 30 /home/zuul/.ansible/tmp/ansible-tmp-1769678046.7837863-36914-47761377929548/AnsiballZ_command.py _'
Jan 29 09:14:07 compute-0 sudo[93920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:07 compute-0 ansible-async_wrapper.py[93925]: Invoked with j99391413346 30 /home/zuul/.ansible/tmp/ansible-tmp-1769678046.7837863-36914-47761377929548/AnsiballZ_command.py _
Jan 29 09:14:07 compute-0 ansible-async_wrapper.py[93964]: Starting module and watcher
Jan 29 09:14:07 compute-0 ansible-async_wrapper.py[93964]: Start watching 93966 (30)
Jan 29 09:14:07 compute-0 ansible-async_wrapper.py[93966]: Start module (93966)
Jan 29 09:14:07 compute-0 ansible-async_wrapper.py[93925]: Return async_wrapper task started.
Jan 29 09:14:07 compute-0 sudo[93920]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:07 compute-0 python3[93967]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:14:07 compute-0 podman[94017]: 2026-01-29 09:14:07.526179333 +0000 UTC m=+0.042438311 container create 83252a5d71c7a9a5a1edc8f32314ef3722ae0d50b3796c89d2a9bf2bc0a91b9a (image=quay.io/ceph/ceph:v20, name=friendly_pascal, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:14:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 29 09:14:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 29 09:14:07 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 09:14:07 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 09:14:07 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 09:14:07 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 09:14:07 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 09:14:07 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 09:14:07 compute-0 ceph-mon[75183]: osdmap e45: 3 total, 3 up, 3 in
Jan 29 09:14:07 compute-0 ceph-mon[75183]: mds.? [v2:192.168.122.100:6814/925034906,v1:192.168.122.100:6815/925034906] up:boot
Jan 29 09:14:07 compute-0 ceph-mon[75183]: daemon mds.cephfs.compute-0.eawrqy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 29 09:14:07 compute-0 ceph-mon[75183]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 29 09:14:07 compute-0 ceph-mon[75183]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 29 09:14:07 compute-0 ceph-mon[75183]: Cluster is now healthy
Jan 29 09:14:07 compute-0 ceph-mon[75183]: fsmap cephfs:0 1 up:standby
Jan 29 09:14:07 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.eawrqy"} : dispatch
Jan 29 09:14:07 compute-0 ceph-mon[75183]: fsmap cephfs:1 {0=cephfs.compute-0.eawrqy=up:creating}
Jan 29 09:14:07 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:07 compute-0 systemd[1]: Started libpod-conmon-83252a5d71c7a9a5a1edc8f32314ef3722ae0d50b3796c89d2a9bf2bc0a91b9a.scope.
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:14:07 compute-0 podman[94017]: 2026-01-29 09:14:07.507211177 +0000 UTC m=+0.023470175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:14:07 compute-0 ceph-mds[93566]: mds.0.4 creating_done
Jan 29 09:14:07 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:07 compute-0 ceph-mon[75183]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.eawrqy is now active in filesystem cephfs as rank 0
Jan 29 09:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af4c4401fa21d55966f69988c9c1c1e3a1ce3a32c7207274dc4ae61c8002d7d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8af4c4401fa21d55966f69988c9c1c1e3a1ce3a32c7207274dc4ae61c8002d7d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:07 compute-0 podman[94017]: 2026-01-29 09:14:07.626482258 +0000 UTC m=+0.142741256 container init 83252a5d71c7a9a5a1edc8f32314ef3722ae0d50b3796c89d2a9bf2bc0a91b9a (image=quay.io/ceph/ceph:v20, name=friendly_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:14:07 compute-0 podman[94017]: 2026-01-29 09:14:07.633260486 +0000 UTC m=+0.149519464 container start 83252a5d71c7a9a5a1edc8f32314ef3722ae0d50b3796c89d2a9bf2bc0a91b9a (image=quay.io/ceph/ceph:v20, name=friendly_pascal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:14:07 compute-0 podman[94017]: 2026-01-29 09:14:07.638325278 +0000 UTC m=+0.154584466 container attach 83252a5d71c7a9a5a1edc8f32314ef3722ae0d50b3796c89d2a9bf2bc0a91b9a (image=quay.io/ceph/ceph:v20, name=friendly_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:14:07 compute-0 sudo[93635]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:14:07 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:14:07 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:14:07 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:14:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:14:07 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:14:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:14:07 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:14:07 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:14:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:14:07 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:14:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:14:07 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:14:07 compute-0 sudo[94053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:14:07 compute-0 sudo[94053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:07 compute-0 sudo[94053]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:07 compute-0 sudo[94097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:14:07 compute-0 sudo[94097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:07 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 29 09:14:07 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 29 09:14:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:08 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 09:14:08 compute-0 friendly_pascal[94049]: 
Jan 29 09:14:08 compute-0 friendly_pascal[94049]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 29 09:14:08 compute-0 systemd[1]: libpod-83252a5d71c7a9a5a1edc8f32314ef3722ae0d50b3796c89d2a9bf2bc0a91b9a.scope: Deactivated successfully.
Jan 29 09:14:08 compute-0 podman[94017]: 2026-01-29 09:14:08.117571921 +0000 UTC m=+0.633830899 container died 83252a5d71c7a9a5a1edc8f32314ef3722ae0d50b3796c89d2a9bf2bc0a91b9a (image=quay.io/ceph/ceph:v20, name=friendly_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 09:14:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8af4c4401fa21d55966f69988c9c1c1e3a1ce3a32c7207274dc4ae61c8002d7d-merged.mount: Deactivated successfully.
Jan 29 09:14:08 compute-0 podman[94135]: 2026-01-29 09:14:08.163685677 +0000 UTC m=+0.058190334 container create 2de2f58ebb42b28cf911e804af1a5287a34d9fa622846e6b9c7cdd09d6c7cee6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:14:08 compute-0 podman[94017]: 2026-01-29 09:14:08.171182184 +0000 UTC m=+0.687441162 container remove 83252a5d71c7a9a5a1edc8f32314ef3722ae0d50b3796c89d2a9bf2bc0a91b9a (image=quay.io/ceph/ceph:v20, name=friendly_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 09:14:08 compute-0 ansible-async_wrapper.py[93966]: Module complete (93966)
Jan 29 09:14:08 compute-0 systemd[1]: Started libpod-conmon-2de2f58ebb42b28cf911e804af1a5287a34d9fa622846e6b9c7cdd09d6c7cee6.scope.
Jan 29 09:14:08 compute-0 systemd[1]: libpod-conmon-83252a5d71c7a9a5a1edc8f32314ef3722ae0d50b3796c89d2a9bf2bc0a91b9a.scope: Deactivated successfully.
Jan 29 09:14:08 compute-0 podman[94135]: 2026-01-29 09:14:08.13705082 +0000 UTC m=+0.031555497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:08 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:08 compute-0 podman[94135]: 2026-01-29 09:14:08.251613059 +0000 UTC m=+0.146117746 container init 2de2f58ebb42b28cf911e804af1a5287a34d9fa622846e6b9c7cdd09d6c7cee6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 09:14:08 compute-0 podman[94135]: 2026-01-29 09:14:08.25777554 +0000 UTC m=+0.152280197 container start 2de2f58ebb42b28cf911e804af1a5287a34d9fa622846e6b9c7cdd09d6c7cee6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:14:08 compute-0 sharp_shtern[94162]: 167 167
Jan 29 09:14:08 compute-0 systemd[1]: libpod-2de2f58ebb42b28cf911e804af1a5287a34d9fa622846e6b9c7cdd09d6c7cee6.scope: Deactivated successfully.
Jan 29 09:14:08 compute-0 podman[94135]: 2026-01-29 09:14:08.267248808 +0000 UTC m=+0.161753485 container attach 2de2f58ebb42b28cf911e804af1a5287a34d9fa622846e6b9c7cdd09d6c7cee6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:14:08 compute-0 podman[94135]: 2026-01-29 09:14:08.268370647 +0000 UTC m=+0.162875304 container died 2de2f58ebb42b28cf911e804af1a5287a34d9fa622846e6b9c7cdd09d6c7cee6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:14:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9be1a71c8299e9006c942e32599ae015afa3fd14671fb8edaa77fa192e13e20-merged.mount: Deactivated successfully.
Jan 29 09:14:08 compute-0 podman[94135]: 2026-01-29 09:14:08.399992852 +0000 UTC m=+0.294497509 container remove 2de2f58ebb42b28cf911e804af1a5287a34d9fa622846e6b9c7cdd09d6c7cee6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shtern, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:14:08 compute-0 systemd[1]: libpod-conmon-2de2f58ebb42b28cf911e804af1a5287a34d9fa622846e6b9c7cdd09d6c7cee6.scope: Deactivated successfully.
Jan 29 09:14:08 compute-0 sudo[94227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iffxsxlkrtzbwhtedrgsykdzrwurqxsj ; /usr/bin/python3'
Jan 29 09:14:08 compute-0 sudo[94227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:08 compute-0 podman[94235]: 2026-01-29 09:14:08.586415581 +0000 UTC m=+0.075727343 container create 65cb7f3820f0f5ea609300c6699f10137a4e6e9bf36d4c3107b2dba6c057889c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:14:08 compute-0 python3[94229]: ansible-ansible.legacy.async_status Invoked with jid=j99391413346.93925 mode=status _async_dir=/root/.ansible_async
Jan 29 09:14:08 compute-0 sudo[94227]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:08 compute-0 podman[94235]: 2026-01-29 09:14:08.537362387 +0000 UTC m=+0.026674149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:08 compute-0 ceph-mon[75183]: 5.1c scrub starts
Jan 29 09:14:08 compute-0 ceph-mon[75183]: 5.1c scrub ok
Jan 29 09:14:08 compute-0 ceph-mon[75183]: osdmap e46: 3 total, 3 up, 3 in
Jan 29 09:14:08 compute-0 ceph-mon[75183]: daemon mds.cephfs.compute-0.eawrqy is now active in filesystem cephfs as rank 0
Jan 29 09:14:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:14:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:14:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:14:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:14:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:14:08 compute-0 ceph-mon[75183]: pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:08 compute-0 ceph-mon[75183]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 09:14:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).mds e5 new map
Jan 29 09:14:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-01-29T09:14:08:598338+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-29T09:13:57.309237+0000
                                           modified        2026-01-29T09:14:08.598336+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14244}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14244 members: 14244
                                           [mds.cephfs.compute-0.eawrqy{0:14244} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/925034906,v1:192.168.122.100:6815/925034906] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 29 09:14:08 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy Updating MDS map to version 5 from mon.0
Jan 29 09:14:08 compute-0 ceph-mds[93566]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 29 09:14:08 compute-0 ceph-mds[93566]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 29 09:14:08 compute-0 ceph-mds[93566]: mds.0.4 recovery_done -- successful recovery!
Jan 29 09:14:08 compute-0 ceph-mds[93566]: mds.0.4 active_start
Jan 29 09:14:08 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/925034906,v1:192.168.122.100:6815/925034906] up:active
Jan 29 09:14:08 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.eawrqy=up:active}
Jan 29 09:14:08 compute-0 systemd[1]: Started libpod-conmon-65cb7f3820f0f5ea609300c6699f10137a4e6e9bf36d4c3107b2dba6c057889c.scope.
Jan 29 09:14:08 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b3792e94f99b49e414d7bcf326b2af1d30f9a38289421414b1b21dc3f55486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b3792e94f99b49e414d7bcf326b2af1d30f9a38289421414b1b21dc3f55486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b3792e94f99b49e414d7bcf326b2af1d30f9a38289421414b1b21dc3f55486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b3792e94f99b49e414d7bcf326b2af1d30f9a38289421414b1b21dc3f55486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b3792e94f99b49e414d7bcf326b2af1d30f9a38289421414b1b21dc3f55486/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:08 compute-0 sudo[94306]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibpvaimseqadugjotutmclvmnngypyon ; /usr/bin/python3'
Jan 29 09:14:08 compute-0 sudo[94306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:08 compute-0 podman[94235]: 2026-01-29 09:14:08.751342317 +0000 UTC m=+0.240654089 container init 65cb7f3820f0f5ea609300c6699f10137a4e6e9bf36d4c3107b2dba6c057889c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:14:08 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 29 09:14:08 compute-0 podman[94235]: 2026-01-29 09:14:08.759983083 +0000 UTC m=+0.249294835 container start 65cb7f3820f0f5ea609300c6699f10137a4e6e9bf36d4c3107b2dba6c057889c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:14:08 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 29 09:14:08 compute-0 podman[94235]: 2026-01-29 09:14:08.788425667 +0000 UTC m=+0.277737449 container attach 65cb7f3820f0f5ea609300c6699f10137a4e6e9bf36d4c3107b2dba6c057889c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hermann, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:14:08 compute-0 python3[94308]: ansible-ansible.legacy.async_status Invoked with jid=j99391413346.93925 mode=cleanup _async_dir=/root/.ansible_async
Jan 29 09:14:08 compute-0 sudo[94306]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:09 compute-0 upbeat_hermann[94283]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:14:09 compute-0 upbeat_hermann[94283]: --> All data devices are unavailable
Jan 29 09:14:09 compute-0 systemd[1]: libpod-65cb7f3820f0f5ea609300c6699f10137a4e6e9bf36d4c3107b2dba6c057889c.scope: Deactivated successfully.
Jan 29 09:14:09 compute-0 podman[94235]: 2026-01-29 09:14:09.198884199 +0000 UTC m=+0.688195971 container died 65cb7f3820f0f5ea609300c6699f10137a4e6e9bf36d4c3107b2dba6c057889c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 09:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-96b3792e94f99b49e414d7bcf326b2af1d30f9a38289421414b1b21dc3f55486-merged.mount: Deactivated successfully.
Jan 29 09:14:09 compute-0 podman[94235]: 2026-01-29 09:14:09.334979841 +0000 UTC m=+0.824291593 container remove 65cb7f3820f0f5ea609300c6699f10137a4e6e9bf36d4c3107b2dba6c057889c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hermann, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:14:09 compute-0 systemd[1]: libpod-conmon-65cb7f3820f0f5ea609300c6699f10137a4e6e9bf36d4c3107b2dba6c057889c.scope: Deactivated successfully.
Jan 29 09:14:09 compute-0 sudo[94361]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-actdluzjdbmohjmudhtbkjvagldbzmrx ; /usr/bin/python3'
Jan 29 09:14:09 compute-0 sudo[94361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:09 compute-0 sudo[94097]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:09 compute-0 sudo[94364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:14:09 compute-0 sudo[94364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:09 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 29 09:14:09 compute-0 sudo[94364]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:09 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 29 09:14:09 compute-0 sudo[94389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:14:09 compute-0 sudo[94389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:09 compute-0 python3[94363]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:14:09 compute-0 podman[94414]: 2026-01-29 09:14:09.6142702 +0000 UTC m=+0.045744838 container create 3662e54a6d707d9e244c78ba436142e2098d869f15d5ad3fd5909b5b5f7889aa (image=quay.io/ceph/ceph:v20, name=competent_tharp, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 09:14:09 compute-0 ceph-mon[75183]: 5.1f scrub starts
Jan 29 09:14:09 compute-0 ceph-mon[75183]: 5.1f scrub ok
Jan 29 09:14:09 compute-0 ceph-mon[75183]: mds.? [v2:192.168.122.100:6814/925034906,v1:192.168.122.100:6815/925034906] up:active
Jan 29 09:14:09 compute-0 ceph-mon[75183]: fsmap cephfs:1 {0=cephfs.compute-0.eawrqy=up:active}
Jan 29 09:14:09 compute-0 ceph-mon[75183]: 5.10 scrub starts
Jan 29 09:14:09 compute-0 ceph-mon[75183]: 5.10 scrub ok
Jan 29 09:14:09 compute-0 systemd[1]: Started libpod-conmon-3662e54a6d707d9e244c78ba436142e2098d869f15d5ad3fd5909b5b5f7889aa.scope.
Jan 29 09:14:09 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389c9666b33e0150d2e03736bbdb5c708ad88e2d80cebd87da4ee83d68a7a3cd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389c9666b33e0150d2e03736bbdb5c708ad88e2d80cebd87da4ee83d68a7a3cd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:09 compute-0 podman[94414]: 2026-01-29 09:14:09.594430411 +0000 UTC m=+0.025905069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:14:09 compute-0 podman[94414]: 2026-01-29 09:14:09.700552128 +0000 UTC m=+0.132026786 container init 3662e54a6d707d9e244c78ba436142e2098d869f15d5ad3fd5909b5b5f7889aa (image=quay.io/ceph/ceph:v20, name=competent_tharp, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:14:09 compute-0 podman[94414]: 2026-01-29 09:14:09.708356973 +0000 UTC m=+0.139831611 container start 3662e54a6d707d9e244c78ba436142e2098d869f15d5ad3fd5909b5b5f7889aa (image=quay.io/ceph/ceph:v20, name=competent_tharp, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:14:09 compute-0 podman[94414]: 2026-01-29 09:14:09.712903542 +0000 UTC m=+0.144378200 container attach 3662e54a6d707d9e244c78ba436142e2098d869f15d5ad3fd5909b5b5f7889aa (image=quay.io/ceph/ceph:v20, name=competent_tharp, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:14:09 compute-0 podman[94447]: 2026-01-29 09:14:09.793715426 +0000 UTC m=+0.040288885 container create a693ec1fa2d6d4ce71f5b4d9e577f6e1109eca7a42819435ebb7cfcb1cc8c03a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 09:14:09 compute-0 systemd[1]: Started libpod-conmon-a693ec1fa2d6d4ce71f5b4d9e577f6e1109eca7a42819435ebb7cfcb1cc8c03a.scope.
Jan 29 09:14:09 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:09 compute-0 podman[94447]: 2026-01-29 09:14:09.775418878 +0000 UTC m=+0.021992367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:09 compute-0 podman[94447]: 2026-01-29 09:14:09.870978848 +0000 UTC m=+0.117552327 container init a693ec1fa2d6d4ce71f5b4d9e577f6e1109eca7a42819435ebb7cfcb1cc8c03a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 29 09:14:09 compute-0 podman[94447]: 2026-01-29 09:14:09.876788551 +0000 UTC m=+0.123362020 container start a693ec1fa2d6d4ce71f5b4d9e577f6e1109eca7a42819435ebb7cfcb1cc8c03a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 09:14:09 compute-0 amazing_jepsen[94480]: 167 167
Jan 29 09:14:09 compute-0 systemd[1]: libpod-a693ec1fa2d6d4ce71f5b4d9e577f6e1109eca7a42819435ebb7cfcb1cc8c03a.scope: Deactivated successfully.
Jan 29 09:14:09 compute-0 conmon[94480]: conmon a693ec1fa2d6d4ce71f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a693ec1fa2d6d4ce71f5b4d9e577f6e1109eca7a42819435ebb7cfcb1cc8c03a.scope/container/memory.events
Jan 29 09:14:09 compute-0 podman[94447]: 2026-01-29 09:14:09.882985443 +0000 UTC m=+0.129558952 container attach a693ec1fa2d6d4ce71f5b4d9e577f6e1109eca7a42819435ebb7cfcb1cc8c03a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:09 compute-0 podman[94447]: 2026-01-29 09:14:09.883401814 +0000 UTC m=+0.129975283 container died a693ec1fa2d6d4ce71f5b4d9e577f6e1109eca7a42819435ebb7cfcb1cc8c03a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3791215327f0fa3ddaf6bd89bf4ac13678dbd756db0d8829de35d83d37bab26d-merged.mount: Deactivated successfully.
Jan 29 09:14:09 compute-0 podman[94447]: 2026-01-29 09:14:09.931261046 +0000 UTC m=+0.177834515 container remove a693ec1fa2d6d4ce71f5b4d9e577f6e1109eca7a42819435ebb7cfcb1cc8c03a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 09:14:09 compute-0 systemd[1]: libpod-conmon-a693ec1fa2d6d4ce71f5b4d9e577f6e1109eca7a42819435ebb7cfcb1cc8c03a.scope: Deactivated successfully.
Jan 29 09:14:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s wr, 6 op/s
Jan 29 09:14:10 compute-0 podman[94504]: 2026-01-29 09:14:10.088267175 +0000 UTC m=+0.069833449 container create a76891cb71f0b3d354013434c75af62c383863ca438a075840b480879ed13b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 09:14:10 compute-0 podman[94504]: 2026-01-29 09:14:10.037311642 +0000 UTC m=+0.018877936 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 09:14:10 compute-0 competent_tharp[94431]: 
Jan 29 09:14:10 compute-0 competent_tharp[94431]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 29 09:14:10 compute-0 systemd[1]: libpod-3662e54a6d707d9e244c78ba436142e2098d869f15d5ad3fd5909b5b5f7889aa.scope: Deactivated successfully.
Jan 29 09:14:10 compute-0 podman[94414]: 2026-01-29 09:14:10.310497801 +0000 UTC m=+0.741972429 container died 3662e54a6d707d9e244c78ba436142e2098d869f15d5ad3fd5909b5b5f7889aa (image=quay.io/ceph/ceph:v20, name=competent_tharp, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 09:14:10 compute-0 systemd[1]: Started libpod-conmon-a76891cb71f0b3d354013434c75af62c383863ca438a075840b480879ed13b6a.scope.
Jan 29 09:14:10 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ecccb040a73b3a4c475710dba640a7fe8b217d9518943b1f646f9ac2102ea2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ecccb040a73b3a4c475710dba640a7fe8b217d9518943b1f646f9ac2102ea2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ecccb040a73b3a4c475710dba640a7fe8b217d9518943b1f646f9ac2102ea2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ecccb040a73b3a4c475710dba640a7fe8b217d9518943b1f646f9ac2102ea2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-389c9666b33e0150d2e03736bbdb5c708ad88e2d80cebd87da4ee83d68a7a3cd-merged.mount: Deactivated successfully.
Jan 29 09:14:10 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 29 09:14:10 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 29 09:14:10 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 29 09:14:10 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 29 09:14:10 compute-0 ceph-mon[75183]: 7.1e scrub starts
Jan 29 09:14:10 compute-0 ceph-mon[75183]: 7.1e scrub ok
Jan 29 09:14:10 compute-0 ceph-mon[75183]: pgmap v104: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s wr, 6 op/s
Jan 29 09:14:10 compute-0 podman[94520]: 2026-01-29 09:14:10.961457447 +0000 UTC m=+0.745712857 container remove 3662e54a6d707d9e244c78ba436142e2098d869f15d5ad3fd5909b5b5f7889aa (image=quay.io/ceph/ceph:v20, name=competent_tharp, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 09:14:10 compute-0 systemd[1]: libpod-conmon-3662e54a6d707d9e244c78ba436142e2098d869f15d5ad3fd5909b5b5f7889aa.scope: Deactivated successfully.
Jan 29 09:14:10 compute-0 sudo[94361]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:11 compute-0 podman[94504]: 2026-01-29 09:14:11.001805292 +0000 UTC m=+0.983371586 container init a76891cb71f0b3d354013434c75af62c383863ca438a075840b480879ed13b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_robinson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 09:14:11 compute-0 podman[94504]: 2026-01-29 09:14:11.010467445 +0000 UTC m=+0.992033719 container start a76891cb71f0b3d354013434c75af62c383863ca438a075840b480879ed13b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_robinson, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:14:11 compute-0 podman[94504]: 2026-01-29 09:14:11.020562057 +0000 UTC m=+1.002128331 container attach a76891cb71f0b3d354013434c75af62c383863ca438a075840b480879ed13b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_robinson, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]: {
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:     "0": [
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:         {
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "devices": [
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "/dev/loop3"
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             ],
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_name": "ceph_lv0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_size": "21470642176",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "name": "ceph_lv0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "tags": {
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.cluster_name": "ceph",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.crush_device_class": "",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.encrypted": "0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.objectstore": "bluestore",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.osd_id": "0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.type": "block",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.vdo": "0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.with_tpm": "0"
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             },
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "type": "block",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "vg_name": "ceph_vg0"
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:         }
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:     ],
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:     "1": [
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:         {
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "devices": [
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "/dev/loop4"
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             ],
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_name": "ceph_lv1",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_size": "21470642176",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "name": "ceph_lv1",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "tags": {
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.cluster_name": "ceph",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.crush_device_class": "",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.encrypted": "0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.objectstore": "bluestore",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.osd_id": "1",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.type": "block",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.vdo": "0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.with_tpm": "0"
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             },
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "type": "block",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "vg_name": "ceph_vg1"
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:         }
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:     ],
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:     "2": [
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:         {
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "devices": [
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "/dev/loop5"
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             ],
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_name": "ceph_lv2",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_size": "21470642176",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "name": "ceph_lv2",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "tags": {
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.cluster_name": "ceph",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.crush_device_class": "",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.encrypted": "0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.objectstore": "bluestore",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:14:11 compute-0 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.osd_id": "2",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.type": "block",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.vdo": "0",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:                 "ceph.with_tpm": "0"
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             },
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "type": "block",
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:             "vg_name": "ceph_vg2"
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:         }
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]:     ]
Jan 29 09:14:11 compute-0 intelligent_robinson[94532]: }
Jan 29 09:14:11 compute-0 systemd[1]: libpod-a76891cb71f0b3d354013434c75af62c383863ca438a075840b480879ed13b6a.scope: Deactivated successfully.
Jan 29 09:14:11 compute-0 podman[94504]: 2026-01-29 09:14:11.342332614 +0000 UTC m=+1.323898888 container died a76891cb71f0b3d354013434c75af62c383863ca438a075840b480879ed13b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-12ecccb040a73b3a4c475710dba640a7fe8b217d9518943b1f646f9ac2102ea2-merged.mount: Deactivated successfully.
Jan 29 09:14:11 compute-0 podman[94504]: 2026-01-29 09:14:11.414440654 +0000 UTC m=+1.396006928 container remove a76891cb71f0b3d354013434c75af62c383863ca438a075840b480879ed13b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_robinson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:14:11 compute-0 systemd[1]: libpod-conmon-a76891cb71f0b3d354013434c75af62c383863ca438a075840b480879ed13b6a.scope: Deactivated successfully.
Jan 29 09:14:11 compute-0 sudo[94389]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:11 compute-0 sudo[94559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:14:11 compute-0 sudo[94559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:11 compute-0 sudo[94559]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:11 compute-0 ceph-mds[93566]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 29 09:14:11 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mds-cephfs-compute-0-eawrqy[93562]: 2026-01-29T09:14:11.578+0000 7f0c4b36c640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 29 09:14:11 compute-0 sudo[94628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqkpaaecijgcuefgcwafulmvrvarugvr ; /usr/bin/python3'
Jan 29 09:14:11 compute-0 sudo[94628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:11 compute-0 sudo[94590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:14:11 compute-0 sudo[94590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:11 compute-0 python3[94633]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:14:11 compute-0 podman[94635]: 2026-01-29 09:14:11.813409379 +0000 UTC m=+0.024474380 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:14:11 compute-0 podman[94635]: 2026-01-29 09:14:11.908890258 +0000 UTC m=+0.119955229 container create ffcd6c537ed3d331f3226ce0659e3e8d35efca26f0fe1b086e5dd6231a51c3a4 (image=quay.io/ceph/ceph:v20, name=upbeat_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 09:14:11 compute-0 ceph-mon[75183]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 09:14:11 compute-0 ceph-mon[75183]: 4.17 scrub starts
Jan 29 09:14:11 compute-0 ceph-mon[75183]: 4.17 scrub ok
Jan 29 09:14:11 compute-0 ceph-mon[75183]: 2.14 scrub starts
Jan 29 09:14:11 compute-0 ceph-mon[75183]: 2.14 scrub ok
Jan 29 09:14:11 compute-0 podman[94661]: 2026-01-29 09:14:11.958897743 +0000 UTC m=+0.121429098 container create c0ff32b89cd7bfb2a4e35b9eb3b64b512d2a6a335b2a1f7175843124ea2bf6e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 09:14:11 compute-0 systemd[1]: Started libpod-conmon-ffcd6c537ed3d331f3226ce0659e3e8d35efca26f0fe1b086e5dd6231a51c3a4.scope.
Jan 29 09:14:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s wr, 5 op/s
Jan 29 09:14:11 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:12 compute-0 systemd[1]: Started libpod-conmon-c0ff32b89cd7bfb2a4e35b9eb3b64b512d2a6a335b2a1f7175843124ea2bf6e3.scope.
Jan 29 09:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1c62aec9fdd0762929e8109c860981bb8025edb910347719c5be3ed835f1b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1c62aec9fdd0762929e8109c860981bb8025edb910347719c5be3ed835f1b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:12 compute-0 podman[94661]: 2026-01-29 09:14:11.916740839 +0000 UTC m=+0.079272214 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:12 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:12 compute-0 podman[94635]: 2026-01-29 09:14:12.046258484 +0000 UTC m=+0.257323475 container init ffcd6c537ed3d331f3226ce0659e3e8d35efca26f0fe1b086e5dd6231a51c3a4 (image=quay.io/ceph/ceph:v20, name=upbeat_aryabhata, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:14:12 compute-0 podman[94635]: 2026-01-29 09:14:12.053963921 +0000 UTC m=+0.265028902 container start ffcd6c537ed3d331f3226ce0659e3e8d35efca26f0fe1b086e5dd6231a51c3a4 (image=quay.io/ceph/ceph:v20, name=upbeat_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:14:12 compute-0 podman[94635]: 2026-01-29 09:14:12.063100097 +0000 UTC m=+0.274165078 container attach ffcd6c537ed3d331f3226ce0659e3e8d35efca26f0fe1b086e5dd6231a51c3a4 (image=quay.io/ceph/ceph:v20, name=upbeat_aryabhata, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:14:12 compute-0 podman[94661]: 2026-01-29 09:14:12.087592886 +0000 UTC m=+0.250124271 container init c0ff32b89cd7bfb2a4e35b9eb3b64b512d2a6a335b2a1f7175843124ea2bf6e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 09:14:12 compute-0 podman[94661]: 2026-01-29 09:14:12.093876775 +0000 UTC m=+0.256408130 container start c0ff32b89cd7bfb2a4e35b9eb3b64b512d2a6a335b2a1f7175843124ea2bf6e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 09:14:12 compute-0 quizzical_wu[94683]: 167 167
Jan 29 09:14:12 compute-0 systemd[1]: libpod-c0ff32b89cd7bfb2a4e35b9eb3b64b512d2a6a335b2a1f7175843124ea2bf6e3.scope: Deactivated successfully.
Jan 29 09:14:12 compute-0 podman[94661]: 2026-01-29 09:14:12.099367413 +0000 UTC m=+0.261898788 container attach c0ff32b89cd7bfb2a4e35b9eb3b64b512d2a6a335b2a1f7175843124ea2bf6e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:12 compute-0 podman[94661]: 2026-01-29 09:14:12.099921527 +0000 UTC m=+0.262452902 container died c0ff32b89cd7bfb2a4e35b9eb3b64b512d2a6a335b2a1f7175843124ea2bf6e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 09:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1923a95d055d9463d39e5b1e9c874d4aa944e71f934cc8b89aee8d183685998-merged.mount: Deactivated successfully.
Jan 29 09:14:12 compute-0 podman[94661]: 2026-01-29 09:14:12.271787752 +0000 UTC m=+0.434319107 container remove c0ff32b89cd7bfb2a4e35b9eb3b64b512d2a6a335b2a1f7175843124ea2bf6e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:12 compute-0 ansible-async_wrapper.py[93964]: Done in kid B.
Jan 29 09:14:12 compute-0 systemd[1]: libpod-conmon-c0ff32b89cd7bfb2a4e35b9eb3b64b512d2a6a335b2a1f7175843124ea2bf6e3.scope: Deactivated successfully.
Jan 29 09:14:12 compute-0 podman[94726]: 2026-01-29 09:14:12.442884065 +0000 UTC m=+0.080471456 container create 8049a8da5fafe94f0e1f5257fc89fe319108ed68cba998169448bcdaab735a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_tesla, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 09:14:12 compute-0 podman[94726]: 2026-01-29 09:14:12.383461666 +0000 UTC m=+0.021049087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:12 compute-0 systemd[1]: Started libpod-conmon-8049a8da5fafe94f0e1f5257fc89fe319108ed68cba998169448bcdaab735a4b.scope.
Jan 29 09:14:12 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 09:14:12 compute-0 upbeat_aryabhata[94678]: 
Jan 29 09:14:12 compute-0 upbeat_aryabhata[94678]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}]
Jan 29 09:14:12 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8263a78d6d8c0bad97fa2cdeb700ac0c12cfed4163c94464263f5c0091add324/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8263a78d6d8c0bad97fa2cdeb700ac0c12cfed4163c94464263f5c0091add324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8263a78d6d8c0bad97fa2cdeb700ac0c12cfed4163c94464263f5c0091add324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8263a78d6d8c0bad97fa2cdeb700ac0c12cfed4163c94464263f5c0091add324/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:12 compute-0 systemd[1]: libpod-ffcd6c537ed3d331f3226ce0659e3e8d35efca26f0fe1b086e5dd6231a51c3a4.scope: Deactivated successfully.
Jan 29 09:14:12 compute-0 podman[94726]: 2026-01-29 09:14:12.563430398 +0000 UTC m=+0.201017819 container init 8049a8da5fafe94f0e1f5257fc89fe319108ed68cba998169448bcdaab735a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:14:12 compute-0 podman[94726]: 2026-01-29 09:14:12.57093176 +0000 UTC m=+0.208519151 container start 8049a8da5fafe94f0e1f5257fc89fe319108ed68cba998169448bcdaab735a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_tesla, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 09:14:12 compute-0 podman[94726]: 2026-01-29 09:14:12.626457494 +0000 UTC m=+0.264044885 container attach 8049a8da5fafe94f0e1f5257fc89fe319108ed68cba998169448bcdaab735a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 09:14:12 compute-0 podman[94635]: 2026-01-29 09:14:12.64301682 +0000 UTC m=+0.854081801 container died ffcd6c537ed3d331f3226ce0659e3e8d35efca26f0fe1b086e5dd6231a51c3a4 (image=quay.io/ceph/ceph:v20, name=upbeat_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd1c62aec9fdd0762929e8109c860981bb8025edb910347719c5be3ed835f1b1-merged.mount: Deactivated successfully.
Jan 29 09:14:12 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 29 09:14:12 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 29 09:14:13 compute-0 podman[94747]: 2026-01-29 09:14:13.026928739 +0000 UTC m=+0.474690383 container remove ffcd6c537ed3d331f3226ce0659e3e8d35efca26f0fe1b086e5dd6231a51c3a4 (image=quay.io/ceph/ceph:v20, name=upbeat_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 09:14:13 compute-0 systemd[1]: libpod-conmon-ffcd6c537ed3d331f3226ce0659e3e8d35efca26f0fe1b086e5dd6231a51c3a4.scope: Deactivated successfully.
Jan 29 09:14:13 compute-0 sudo[94628]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:13 compute-0 ceph-mon[75183]: pgmap v105: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s wr, 5 op/s
Jan 29 09:14:13 compute-0 lvm[94835]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:14:13 compute-0 lvm[94835]: VG ceph_vg0 finished
Jan 29 09:14:13 compute-0 lvm[94838]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:14:13 compute-0 lvm[94838]: VG ceph_vg1 finished
Jan 29 09:14:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:14:13 compute-0 lvm[94840]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:14:13 compute-0 lvm[94840]: VG ceph_vg2 finished
Jan 29 09:14:13 compute-0 bold_tesla[94742]: {}
Jan 29 09:14:13 compute-0 systemd[1]: libpod-8049a8da5fafe94f0e1f5257fc89fe319108ed68cba998169448bcdaab735a4b.scope: Deactivated successfully.
Jan 29 09:14:13 compute-0 systemd[1]: libpod-8049a8da5fafe94f0e1f5257fc89fe319108ed68cba998169448bcdaab735a4b.scope: Consumed 1.308s CPU time.
Jan 29 09:14:13 compute-0 podman[94726]: 2026-01-29 09:14:13.476878075 +0000 UTC m=+1.114465466 container died 8049a8da5fafe94f0e1f5257fc89fe319108ed68cba998169448bcdaab735a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:14:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8263a78d6d8c0bad97fa2cdeb700ac0c12cfed4163c94464263f5c0091add324-merged.mount: Deactivated successfully.
Jan 29 09:14:13 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Jan 29 09:14:13 compute-0 podman[94726]: 2026-01-29 09:14:13.701086757 +0000 UTC m=+1.338674178 container remove 8049a8da5fafe94f0e1f5257fc89fe319108ed68cba998169448bcdaab735a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_tesla, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 09:14:13 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Jan 29 09:14:13 compute-0 systemd[1]: libpod-conmon-8049a8da5fafe94f0e1f5257fc89fe319108ed68cba998169448bcdaab735a4b.scope: Deactivated successfully.
Jan 29 09:14:13 compute-0 sudo[94590]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:14:13 compute-0 sudo[94878]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbmekysijktgdfdipehgpnygkyeqqgwt ; /usr/bin/python3'
Jan 29 09:14:13 compute-0 sudo[94878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:13 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:14:13 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:13 compute-0 sudo[94881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:14:13 compute-0 sudo[94881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:13 compute-0 sudo[94881]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:13 compute-0 python3[94880]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:14:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s wr, 5 op/s
Jan 29 09:14:13 compute-0 sudo[94906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:14:13 compute-0 sudo[94906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:13 compute-0 sudo[94906]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:14 compute-0 podman[94926]: 2026-01-29 09:14:14.009950247 +0000 UTC m=+0.043629255 container create c311b37504b8abb4052c9f71332112a84b3f3ffa60e44cf8bd48429eb23c2414 (image=quay.io/ceph/ceph:v20, name=interesting_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 29 09:14:14 compute-0 sudo[94941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 29 09:14:14 compute-0 sudo[94941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:14 compute-0 systemd[1]: Started libpod-conmon-c311b37504b8abb4052c9f71332112a84b3f3ffa60e44cf8bd48429eb23c2414.scope.
Jan 29 09:14:14 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:14 compute-0 ceph-mon[75183]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 09:14:14 compute-0 ceph-mon[75183]: 2.12 scrub starts
Jan 29 09:14:14 compute-0 ceph-mon[75183]: 2.12 scrub ok
Jan 29 09:14:14 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:14 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:14 compute-0 podman[94926]: 2026-01-29 09:14:13.992329163 +0000 UTC m=+0.026008201 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655e1bd05e9c31a5aa249dc11ed0e56b4119e335216462f9e643e9f66c189e51/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655e1bd05e9c31a5aa249dc11ed0e56b4119e335216462f9e643e9f66c189e51/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:14 compute-0 podman[94926]: 2026-01-29 09:14:14.11522081 +0000 UTC m=+0.148899838 container init c311b37504b8abb4052c9f71332112a84b3f3ffa60e44cf8bd48429eb23c2414 (image=quay.io/ceph/ceph:v20, name=interesting_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 09:14:14 compute-0 podman[94926]: 2026-01-29 09:14:14.123870972 +0000 UTC m=+0.157549980 container start c311b37504b8abb4052c9f71332112a84b3f3ffa60e44cf8bd48429eb23c2414 (image=quay.io/ceph/ceph:v20, name=interesting_sanderson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:14:14 compute-0 podman[94926]: 2026-01-29 09:14:14.13493118 +0000 UTC m=+0.168610328 container attach c311b37504b8abb4052c9f71332112a84b3f3ffa60e44cf8bd48429eb23c2414 (image=quay.io/ceph/ceph:v20, name=interesting_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:14:14 compute-0 podman[95038]: 2026-01-29 09:14:14.529288809 +0000 UTC m=+0.097088322 container exec 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14252 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 09:14:14 compute-0 interesting_sanderson[94972]: 
Jan 29 09:14:14 compute-0 interesting_sanderson[94972]: [{"container_id": "8ed393bfd921", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.15%", "created": "2026-01-29T09:12:20.039962Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-29T09:12:20.140914Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-29T09:14:07.646959Z", "memory_usage": 7795113, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-29T09:12:19.938923Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437@crash.compute-0", "version": "20.2.0"}, {"container_id": "47537243927a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "7.25%", "created": "2026-01-29T09:14:06.100775Z", "daemon_id": "cephfs.compute-0.eawrqy", "daemon_name": "mds.cephfs.compute-0.eawrqy", "daemon_type": "mds", "events": ["2026-01-29T09:14:06.184634Z daemon:mds.cephfs.compute-0.eawrqy [INFO] \"Deployed mds.cephfs.compute-0.eawrqy on host 'compute-0'\""], "hostname": "compute-0", "is_active": true, "last_refresh": 
"2026-01-29T09:14:07.647382Z", "memory_usage": 16064184, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-01-29T09:14:05.996358Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437@mds.cephfs.compute-0.eawrqy", "version": "20.2.0"}, {"container_id": "673f2b22a08b", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "14.00%", "created": "2026-01-29T09:11:39.697875Z", "daemon_id": "compute-0.ucpkkb", "daemon_name": "mgr.compute-0.ucpkkb", "daemon_type": "mgr", "events": ["2026-01-29T09:12:25.448823Z daemon:mgr.compute-0.ucpkkb [INFO] \"Reconfigured mgr.compute-0.ucpkkb on host 'compute-0'\""], "hostname": "compute-0", "is_active": true, "last_refresh": "2026-01-29T09:14:07.646852Z", "memory_usage": 549663539, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-29T09:11:39.616884Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437@mgr.compute-0.ucpkkb", "version": "20.2.0"}, {"container_id": "19fe20f3e43e", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.44%", "created": "2026-01-29T09:11:36.018776Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-29T09:12:24.643786Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-29T09:14:07.646695Z", "memory_request": 2147483648, "memory_usage": 43610275, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-29T09:11:37.947199Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437@mon.compute-0", "version": "20.2.0"}, {"container_id": "55decf3a5ce4", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.54%", "created": "2026-01-29T09:12:46.074665Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-29T09:12:46.218245Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-29T09:14:07.647061Z", "memory_request": 4294967296, "memory_usage": 69101158, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-29T09:12:45.896452Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437@osd.0", "version": "20.2.0"}, {"container_id": "6b017a93c8d5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "1.82%", "created": "2026-01-29T09:12:55.334914Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-29T09:12:57.696612Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-29T09:14:07.647180Z", "memory_request": 4294967296, "memory_usage": 67916267, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-29T09:12:53.168402Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437@osd.1", "version": "20.2.0"}, {"container_id": "1b320510371f", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "2.00%", "created": "2026-01-29T09:13:05.544190Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-29T09:13:05.772061Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-29T09:14:07.647281Z", "memory_request": 4294967296, "memory_usage": 66238545, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-29T09:13:05.312022Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437@osd.2", "version": "20.2.0"}]
Jan 29 09:14:14 compute-0 systemd[1]: libpod-c311b37504b8abb4052c9f71332112a84b3f3ffa60e44cf8bd48429eb23c2414.scope: Deactivated successfully.
Jan 29 09:14:14 compute-0 podman[94926]: 2026-01-29 09:14:14.601722988 +0000 UTC m=+0.635401996 container died c311b37504b8abb4052c9f71332112a84b3f3ffa60e44cf8bd48429eb23c2414 (image=quay.io/ceph/ceph:v20, name=interesting_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 09:14:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-655e1bd05e9c31a5aa249dc11ed0e56b4119e335216462f9e643e9f66c189e51-merged.mount: Deactivated successfully.
Jan 29 09:14:14 compute-0 podman[94926]: 2026-01-29 09:14:14.673561001 +0000 UTC m=+0.707240009 container remove c311b37504b8abb4052c9f71332112a84b3f3ffa60e44cf8bd48429eb23c2414 (image=quay.io/ceph/ceph:v20, name=interesting_sanderson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:14:14 compute-0 podman[95038]: 2026-01-29 09:14:14.678892865 +0000 UTC m=+0.246692358 container exec_died 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 09:14:14 compute-0 sudo[94878]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:14 compute-0 systemd[1]: libpod-conmon-c311b37504b8abb4052c9f71332112a84b3f3ffa60e44cf8bd48429eb23c2414.scope: Deactivated successfully.
Jan 29 09:14:15 compute-0 ceph-mon[75183]: 6.1a scrub starts
Jan 29 09:14:15 compute-0 ceph-mon[75183]: 6.1a scrub ok
Jan 29 09:14:15 compute-0 ceph-mon[75183]: pgmap v106: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s wr, 5 op/s
Jan 29 09:14:15 compute-0 sudo[94941]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:14:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:14:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:14:15 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:14:15 compute-0 sudo[95243]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znowybfcbprfdtedkvkwszeloilwejpy ; /usr/bin/python3'
Jan 29 09:14:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:14:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:14:15 compute-0 sudo[95243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:14:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:14:15 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:14:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:14:15 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:14:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:14:15 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:14:15 compute-0 sudo[95246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:14:15 compute-0 sudo[95246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:15 compute-0 sudo[95246]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:15 compute-0 sudo[95271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:14:15 compute-0 sudo[95271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:15 compute-0 python3[95245]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:14:15 compute-0 podman[95296]: 2026-01-29 09:14:15.759268982 +0000 UTC m=+0.025206329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:14:15 compute-0 podman[95296]: 2026-01-29 09:14:15.872814327 +0000 UTC m=+0.138751644 container create 8bf2b24e1334d26b2891ae796047976e5c008d8e6ce7a99dd1326584271b4f78 (image=quay.io/ceph/ceph:v20, name=fervent_leavitt, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 09:14:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s wr, 4 op/s
Jan 29 09:14:15 compute-0 systemd[1]: Started libpod-conmon-8bf2b24e1334d26b2891ae796047976e5c008d8e6ce7a99dd1326584271b4f78.scope.
Jan 29 09:14:15 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70e460c5eb67723b7bfc9c6bc8287cd37a76065b2d6dc073e6611f917e8aff8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70e460c5eb67723b7bfc9c6bc8287cd37a76065b2d6dc073e6611f917e8aff8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:16 compute-0 podman[95319]: 2026-01-29 09:14:15.959395927 +0000 UTC m=+0.062367729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:16 compute-0 podman[95319]: 2026-01-29 09:14:16.154940888 +0000 UTC m=+0.257912670 container create de9164061e7fcef5b8d1c232f8d025787064a1812e1bbc7c3a8f7dacf40b6584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_zhukovsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:14:16 compute-0 ceph-mon[75183]: from='client.14252 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 09:14:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:14:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:14:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:14:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:14:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:14:16 compute-0 systemd[1]: Started libpod-conmon-de9164061e7fcef5b8d1c232f8d025787064a1812e1bbc7c3a8f7dacf40b6584.scope.
Jan 29 09:14:16 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:16 compute-0 podman[95296]: 2026-01-29 09:14:16.25687023 +0000 UTC m=+0.522807578 container init 8bf2b24e1334d26b2891ae796047976e5c008d8e6ce7a99dd1326584271b4f78 (image=quay.io/ceph/ceph:v20, name=fervent_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:14:16 compute-0 podman[95296]: 2026-01-29 09:14:16.263944661 +0000 UTC m=+0.529881978 container start 8bf2b24e1334d26b2891ae796047976e5c008d8e6ce7a99dd1326584271b4f78 (image=quay.io/ceph/ceph:v20, name=fervent_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 09:14:16 compute-0 podman[95319]: 2026-01-29 09:14:16.275106021 +0000 UTC m=+0.378077823 container init de9164061e7fcef5b8d1c232f8d025787064a1812e1bbc7c3a8f7dacf40b6584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 09:14:16 compute-0 podman[95319]: 2026-01-29 09:14:16.27989856 +0000 UTC m=+0.382870342 container start de9164061e7fcef5b8d1c232f8d025787064a1812e1bbc7c3a8f7dacf40b6584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_zhukovsky, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 09:14:16 compute-0 stoic_zhukovsky[95340]: 167 167
Jan 29 09:14:16 compute-0 systemd[1]: libpod-de9164061e7fcef5b8d1c232f8d025787064a1812e1bbc7c3a8f7dacf40b6584.scope: Deactivated successfully.
Jan 29 09:14:16 compute-0 podman[95319]: 2026-01-29 09:14:16.28693955 +0000 UTC m=+0.389911332 container attach de9164061e7fcef5b8d1c232f8d025787064a1812e1bbc7c3a8f7dacf40b6584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:16 compute-0 podman[95319]: 2026-01-29 09:14:16.287367831 +0000 UTC m=+0.390339613 container died de9164061e7fcef5b8d1c232f8d025787064a1812e1bbc7c3a8f7dacf40b6584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_zhukovsky, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 29 09:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8bea392e9a43f42bfc4286cf77bf24e56cd5d5b673573518bb0c42e328903e5-merged.mount: Deactivated successfully.
Jan 29 09:14:16 compute-0 podman[95319]: 2026-01-29 09:14:16.34938719 +0000 UTC m=+0.452358972 container remove de9164061e7fcef5b8d1c232f8d025787064a1812e1bbc7c3a8f7dacf40b6584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:14:16 compute-0 systemd[1]: libpod-conmon-de9164061e7fcef5b8d1c232f8d025787064a1812e1bbc7c3a8f7dacf40b6584.scope: Deactivated successfully.
Jan 29 09:14:16 compute-0 podman[95296]: 2026-01-29 09:14:16.378853052 +0000 UTC m=+0.644790409 container attach 8bf2b24e1334d26b2891ae796047976e5c008d8e6ce7a99dd1326584271b4f78 (image=quay.io/ceph/ceph:v20, name=fervent_leavitt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 09:14:16 compute-0 podman[95385]: 2026-01-29 09:14:16.508998534 +0000 UTC m=+0.055893785 container create 741a29e28324ee2ca182305c810ec070aa6e6f48e6c54adb3c975178bbd503e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:16 compute-0 podman[95385]: 2026-01-29 09:14:16.479737027 +0000 UTC m=+0.026632298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:16 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 29 09:14:16 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 29 09:14:16 compute-0 systemd[1]: Started libpod-conmon-741a29e28324ee2ca182305c810ec070aa6e6f48e6c54adb3c975178bbd503e0.scope.
Jan 29 09:14:16 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fcef5c4bf9a951b3be55b7ef38a8b3263d54d0f6b3851b2f73459562528ad59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fcef5c4bf9a951b3be55b7ef38a8b3263d54d0f6b3851b2f73459562528ad59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fcef5c4bf9a951b3be55b7ef38a8b3263d54d0f6b3851b2f73459562528ad59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fcef5c4bf9a951b3be55b7ef38a8b3263d54d0f6b3851b2f73459562528ad59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fcef5c4bf9a951b3be55b7ef38a8b3263d54d0f6b3851b2f73459562528ad59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:16 compute-0 podman[95385]: 2026-01-29 09:14:16.722961471 +0000 UTC m=+0.269856712 container init 741a29e28324ee2ca182305c810ec070aa6e6f48e6c54adb3c975178bbd503e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:14:16 compute-0 podman[95385]: 2026-01-29 09:14:16.728533251 +0000 UTC m=+0.275428502 container start 741a29e28324ee2ca182305c810ec070aa6e6f48e6c54adb3c975178bbd503e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_torvalds, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 09:14:16 compute-0 podman[95385]: 2026-01-29 09:14:16.735004995 +0000 UTC m=+0.281900276 container attach 741a29e28324ee2ca182305c810ec070aa6e6f48e6c54adb3c975178bbd503e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_torvalds, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 29 09:14:16 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1265239940' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 09:14:16 compute-0 fervent_leavitt[95335]: 
Jan 29 09:14:16 compute-0 fervent_leavitt[95335]: {"fsid":"3fdce3ca-565d-5459-88e8-1ffe58b48437","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":158,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":46,"num_osds":3,"num_up_osds":3,"osd_up_since":1769677998,"num_in_osds":3,"osd_in_since":1769677957,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":24,"data_bytes":461710,"bytes_used":84357120,"bytes_avail":64327569408,"bytes_total":64411926528,"write_bytes_sec":1519,"read_op_per_sec":0,"write_op_per_sec":4},"fsmap":{"epoch":5,"btime":"2026-01-29T09:14:08:598338+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.eawrqy","status":"up:active","gid":14244}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":4,"modified":"2026-01-29T09:14:15.973153+0000","services":{"mds":{"daemons":{"summary":"","cephfs.compute-0.eawrqy":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 29 09:14:16 compute-0 systemd[1]: libpod-8bf2b24e1334d26b2891ae796047976e5c008d8e6ce7a99dd1326584271b4f78.scope: Deactivated successfully.
Jan 29 09:14:16 compute-0 conmon[95335]: conmon 8bf2b24e1334d26b2891 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8bf2b24e1334d26b2891ae796047976e5c008d8e6ce7a99dd1326584271b4f78.scope/container/memory.events
Jan 29 09:14:16 compute-0 podman[95296]: 2026-01-29 09:14:16.8545162 +0000 UTC m=+1.120453507 container died 8bf2b24e1334d26b2891ae796047976e5c008d8e6ce7a99dd1326584271b4f78 (image=quay.io/ceph/ceph:v20, name=fervent_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c70e460c5eb67723b7bfc9c6bc8287cd37a76065b2d6dc073e6611f917e8aff8-merged.mount: Deactivated successfully.
Jan 29 09:14:16 compute-0 podman[95296]: 2026-01-29 09:14:16.976741799 +0000 UTC m=+1.242679116 container remove 8bf2b24e1334d26b2891ae796047976e5c008d8e6ce7a99dd1326584271b4f78 (image=quay.io/ceph/ceph:v20, name=fervent_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:14:16 compute-0 systemd[1]: libpod-conmon-8bf2b24e1334d26b2891ae796047976e5c008d8e6ce7a99dd1326584271b4f78.scope: Deactivated successfully.
Jan 29 09:14:17 compute-0 sudo[95243]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:17 compute-0 admiring_torvalds[95403]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:14:17 compute-0 admiring_torvalds[95403]: --> All data devices are unavailable
Jan 29 09:14:17 compute-0 systemd[1]: libpod-741a29e28324ee2ca182305c810ec070aa6e6f48e6c54adb3c975178bbd503e0.scope: Deactivated successfully.
Jan 29 09:14:17 compute-0 podman[95385]: 2026-01-29 09:14:17.477983475 +0000 UTC m=+1.024878726 container died 741a29e28324ee2ca182305c810ec070aa6e6f48e6c54adb3c975178bbd503e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 09:14:17 compute-0 ceph-mon[75183]: pgmap v107: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s wr, 4 op/s
Jan 29 09:14:17 compute-0 ceph-mon[75183]: 4.16 scrub starts
Jan 29 09:14:17 compute-0 ceph-mon[75183]: 4.16 scrub ok
Jan 29 09:14:17 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1265239940' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 09:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fcef5c4bf9a951b3be55b7ef38a8b3263d54d0f6b3851b2f73459562528ad59-merged.mount: Deactivated successfully.
Jan 29 09:14:17 compute-0 podman[95385]: 2026-01-29 09:14:17.531424703 +0000 UTC m=+1.078319954 container remove 741a29e28324ee2ca182305c810ec070aa6e6f48e6c54adb3c975178bbd503e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_torvalds, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 09:14:17 compute-0 systemd[1]: libpod-conmon-741a29e28324ee2ca182305c810ec070aa6e6f48e6c54adb3c975178bbd503e0.scope: Deactivated successfully.
Jan 29 09:14:17 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 29 09:14:17 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 29 09:14:17 compute-0 sudo[95271]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:17 compute-0 sudo[95449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:14:17 compute-0 sudo[95449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:17 compute-0 sudo[95449]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:17 compute-0 sudo[95474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:14:17 compute-0 sudo[95474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:17 compute-0 sudo[95522]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yugybgyzxvlilajhkrwnjzqptzzbgkeo ; /usr/bin/python3'
Jan 29 09:14:17 compute-0 sudo[95522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:17 compute-0 python3[95524]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:14:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s wr, 4 op/s
Jan 29 09:14:18 compute-0 podman[95532]: 2026-01-29 09:14:18.081300107 +0000 UTC m=+0.098123030 container create cec7c90869ba01ffb98c2e82c3bf0f2b2a022e875b93bf4bf4bd6a95c50bfb4f (image=quay.io/ceph/ceph:v20, name=relaxed_mahavira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:14:18 compute-0 podman[95532]: 2026-01-29 09:14:18.008456518 +0000 UTC m=+0.025279471 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:14:18 compute-0 podman[95547]: 2026-01-29 09:14:18.118726154 +0000 UTC m=+0.107677497 container create 69c732838b3bc6b100b29834422640a2720103681da00ef419188eaff92bf2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:14:18 compute-0 systemd[1]: Started libpod-conmon-cec7c90869ba01ffb98c2e82c3bf0f2b2a022e875b93bf4bf4bd6a95c50bfb4f.scope.
Jan 29 09:14:18 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:18 compute-0 systemd[1]: Started libpod-conmon-69c732838b3bc6b100b29834422640a2720103681da00ef419188eaff92bf2f4.scope.
Jan 29 09:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67592de51ee9c01d7f0961f6a903f4a51317559a536fd2a0dc00abfaf72eca46/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67592de51ee9c01d7f0961f6a903f4a51317559a536fd2a0dc00abfaf72eca46/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:18 compute-0 podman[95547]: 2026-01-29 09:14:18.089112547 +0000 UTC m=+0.078063940 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:18 compute-0 podman[95532]: 2026-01-29 09:14:18.185240653 +0000 UTC m=+0.202063606 container init cec7c90869ba01ffb98c2e82c3bf0f2b2a022e875b93bf4bf4bd6a95c50bfb4f (image=quay.io/ceph/ceph:v20, name=relaxed_mahavira, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:14:18 compute-0 podman[95532]: 2026-01-29 09:14:18.191835651 +0000 UTC m=+0.208658574 container start cec7c90869ba01ffb98c2e82c3bf0f2b2a022e875b93bf4bf4bd6a95c50bfb4f (image=quay.io/ceph/ceph:v20, name=relaxed_mahavira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:14:18 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:18 compute-0 podman[95532]: 2026-01-29 09:14:18.197192255 +0000 UTC m=+0.214015178 container attach cec7c90869ba01ffb98c2e82c3bf0f2b2a022e875b93bf4bf4bd6a95c50bfb4f (image=quay.io/ceph/ceph:v20, name=relaxed_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 09:14:18 compute-0 podman[95547]: 2026-01-29 09:14:18.211938322 +0000 UTC m=+0.200889705 container init 69c732838b3bc6b100b29834422640a2720103681da00ef419188eaff92bf2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_moore, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:18 compute-0 podman[95547]: 2026-01-29 09:14:18.216557556 +0000 UTC m=+0.205508909 container start 69c732838b3bc6b100b29834422640a2720103681da00ef419188eaff92bf2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Jan 29 09:14:18 compute-0 zealous_moore[95569]: 167 167
Jan 29 09:14:18 compute-0 systemd[1]: libpod-69c732838b3bc6b100b29834422640a2720103681da00ef419188eaff92bf2f4.scope: Deactivated successfully.
Jan 29 09:14:18 compute-0 podman[95547]: 2026-01-29 09:14:18.237844309 +0000 UTC m=+0.226795672 container attach 69c732838b3bc6b100b29834422640a2720103681da00ef419188eaff92bf2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_moore, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 09:14:18 compute-0 podman[95547]: 2026-01-29 09:14:18.238754393 +0000 UTC m=+0.227705746 container died 69c732838b3bc6b100b29834422640a2720103681da00ef419188eaff92bf2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_moore, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 09:14:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:14:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-13eca68ccb9d86c1be84dd7a05ba78c4a0e7551352cb1b0ae55c81c2c698c811-merged.mount: Deactivated successfully.
Jan 29 09:14:18 compute-0 podman[95547]: 2026-01-29 09:14:18.384638948 +0000 UTC m=+0.373590301 container remove 69c732838b3bc6b100b29834422640a2720103681da00ef419188eaff92bf2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 09:14:18 compute-0 systemd[1]: libpod-conmon-69c732838b3bc6b100b29834422640a2720103681da00ef419188eaff92bf2f4.scope: Deactivated successfully.
Jan 29 09:14:18 compute-0 ceph-mon[75183]: 7.1d scrub starts
Jan 29 09:14:18 compute-0 ceph-mon[75183]: 7.1d scrub ok
Jan 29 09:14:18 compute-0 ceph-mon[75183]: pgmap v108: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s wr, 4 op/s
Jan 29 09:14:18 compute-0 podman[95616]: 2026-01-29 09:14:18.533173145 +0000 UTC m=+0.056415439 container create 6ebf3f43d2d55439fe9f4fa00d258ce7ba8162d4d6a9955472fea118f1c85dbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ramanujan, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 09:14:18 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 29 09:14:18 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 29 09:14:18 compute-0 systemd[1]: Started libpod-conmon-6ebf3f43d2d55439fe9f4fa00d258ce7ba8162d4d6a9955472fea118f1c85dbb.scope.
Jan 29 09:14:18 compute-0 podman[95616]: 2026-01-29 09:14:18.503468825 +0000 UTC m=+0.026711139 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:18 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a874874e89c1168b77ce9acd7e22473f622492ccc47801ea63f38a25b092e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a874874e89c1168b77ce9acd7e22473f622492ccc47801ea63f38a25b092e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a874874e89c1168b77ce9acd7e22473f622492ccc47801ea63f38a25b092e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a874874e89c1168b77ce9acd7e22473f622492ccc47801ea63f38a25b092e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:18 compute-0 podman[95616]: 2026-01-29 09:14:18.637032589 +0000 UTC m=+0.160274913 container init 6ebf3f43d2d55439fe9f4fa00d258ce7ba8162d4d6a9955472fea118f1c85dbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:18 compute-0 podman[95616]: 2026-01-29 09:14:18.643468872 +0000 UTC m=+0.166711166 container start 6ebf3f43d2d55439fe9f4fa00d258ce7ba8162d4d6a9955472fea118f1c85dbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ramanujan, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 09:14:18 compute-0 podman[95616]: 2026-01-29 09:14:18.665956277 +0000 UTC m=+0.189198591 container attach 6ebf3f43d2d55439fe9f4fa00d258ce7ba8162d4d6a9955472fea118f1c85dbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:14:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 29 09:14:18 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051010067' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 09:14:18 compute-0 relaxed_mahavira[95564]: 
Jan 29 09:14:18 compute-0 systemd[1]: libpod-cec7c90869ba01ffb98c2e82c3bf0f2b2a022e875b93bf4bf4bd6a95c50bfb4f.scope: Deactivated successfully.
Jan 29 09:14:18 compute-0 conmon[95564]: conmon cec7c90869ba01ffb98c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cec7c90869ba01ffb98c2e82c3bf0f2b2a022e875b93bf4bf4bd6a95c50bfb4f.scope/container/memory.events
Jan 29 09:14:18 compute-0 relaxed_mahavira[95564]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtim
e":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""}]
Jan 29 09:14:18 compute-0 podman[95532]: 2026-01-29 09:14:18.720156966 +0000 UTC m=+0.736979899 container died cec7c90869ba01ffb98c2e82c3bf0f2b2a022e875b93bf4bf4bd6a95c50bfb4f (image=quay.io/ceph/ceph:v20, name=relaxed_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 09:14:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-67592de51ee9c01d7f0961f6a903f4a51317559a536fd2a0dc00abfaf72eca46-merged.mount: Deactivated successfully.
Jan 29 09:14:18 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 29 09:14:18 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 29 09:14:18 compute-0 podman[95532]: 2026-01-29 09:14:18.814501404 +0000 UTC m=+0.831324327 container remove cec7c90869ba01ffb98c2e82c3bf0f2b2a022e875b93bf4bf4bd6a95c50bfb4f (image=quay.io/ceph/ceph:v20, name=relaxed_mahavira, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 09:14:18 compute-0 systemd[1]: libpod-conmon-cec7c90869ba01ffb98c2e82c3bf0f2b2a022e875b93bf4bf4bd6a95c50bfb4f.scope: Deactivated successfully.
Jan 29 09:14:18 compute-0 sudo[95522]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]: {
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:     "0": [
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:         {
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "devices": [
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "/dev/loop3"
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             ],
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_name": "ceph_lv0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_size": "21470642176",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "name": "ceph_lv0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "tags": {
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.cluster_name": "ceph",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.crush_device_class": "",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.encrypted": "0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.objectstore": "bluestore",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.osd_id": "0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.type": "block",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.vdo": "0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.with_tpm": "0"
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             },
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "type": "block",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "vg_name": "ceph_vg0"
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:         }
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:     ],
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:     "1": [
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:         {
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "devices": [
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "/dev/loop4"
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             ],
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_name": "ceph_lv1",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_size": "21470642176",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "name": "ceph_lv1",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "tags": {
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.cluster_name": "ceph",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.crush_device_class": "",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.encrypted": "0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.objectstore": "bluestore",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.osd_id": "1",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.type": "block",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.vdo": "0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.with_tpm": "0"
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             },
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "type": "block",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "vg_name": "ceph_vg1"
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:         }
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:     ],
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:     "2": [
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:         {
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "devices": [
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "/dev/loop5"
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             ],
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_name": "ceph_lv2",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_size": "21470642176",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "name": "ceph_lv2",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "tags": {
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.cluster_name": "ceph",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.crush_device_class": "",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.encrypted": "0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.objectstore": "bluestore",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.osd_id": "2",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.type": "block",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.vdo": "0",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:                 "ceph.with_tpm": "0"
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             },
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "type": "block",
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:             "vg_name": "ceph_vg2"
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:         }
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]:     ]
Jan 29 09:14:18 compute-0 gifted_ramanujan[95632]: }
Jan 29 09:14:19 compute-0 systemd[1]: libpod-6ebf3f43d2d55439fe9f4fa00d258ce7ba8162d4d6a9955472fea118f1c85dbb.scope: Deactivated successfully.
Jan 29 09:14:19 compute-0 podman[95616]: 2026-01-29 09:14:19.002226595 +0000 UTC m=+0.525468899 container died 6ebf3f43d2d55439fe9f4fa00d258ce7ba8162d4d6a9955472fea118f1c85dbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ramanujan, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 09:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3a874874e89c1168b77ce9acd7e22473f622492ccc47801ea63f38a25b092e1-merged.mount: Deactivated successfully.
Jan 29 09:14:19 compute-0 podman[95616]: 2026-01-29 09:14:19.095652418 +0000 UTC m=+0.618894712 container remove 6ebf3f43d2d55439fe9f4fa00d258ce7ba8162d4d6a9955472fea118f1c85dbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ramanujan, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 09:14:19 compute-0 systemd[1]: libpod-conmon-6ebf3f43d2d55439fe9f4fa00d258ce7ba8162d4d6a9955472fea118f1c85dbb.scope: Deactivated successfully.
Jan 29 09:14:19 compute-0 sudo[95474]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:19 compute-0 sudo[95667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:14:19 compute-0 sudo[95667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:19 compute-0 sudo[95667]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:19 compute-0 sudo[95692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:14:19 compute-0 sudo[95692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:19 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 29 09:14:19 compute-0 ceph-mon[75183]: 3.19 scrub starts
Jan 29 09:14:19 compute-0 ceph-mon[75183]: 3.19 scrub ok
Jan 29 09:14:19 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1051010067' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 09:14:19 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 29 09:14:19 compute-0 sudo[95765]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grmbssnieleykcuodgbjfgdviwdqrvcg ; /usr/bin/python3'
Jan 29 09:14:19 compute-0 sudo[95765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:19 compute-0 podman[95728]: 2026-01-29 09:14:19.55471083 +0000 UTC m=+0.025112787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:19 compute-0 podman[95728]: 2026-01-29 09:14:19.675010096 +0000 UTC m=+0.145412023 container create 18c5e1178b215c6d7d4efed2fe56a35ac7ecddc6df1f017dc7b26c03a6686384 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 09:14:19 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 29 09:14:19 compute-0 systemd[1]: Started libpod-conmon-18c5e1178b215c6d7d4efed2fe56a35ac7ecddc6df1f017dc7b26c03a6686384.scope.
Jan 29 09:14:19 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 29 09:14:19 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:19 compute-0 python3[95767]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:14:19 compute-0 podman[95728]: 2026-01-29 09:14:19.871972796 +0000 UTC m=+0.342374753 container init 18c5e1178b215c6d7d4efed2fe56a35ac7ecddc6df1f017dc7b26c03a6686384 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:14:19 compute-0 podman[95728]: 2026-01-29 09:14:19.879604541 +0000 UTC m=+0.350006468 container start 18c5e1178b215c6d7d4efed2fe56a35ac7ecddc6df1f017dc7b26c03a6686384 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_brattain, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:19 compute-0 peaceful_brattain[95771]: 167 167
Jan 29 09:14:19 compute-0 systemd[1]: libpod-18c5e1178b215c6d7d4efed2fe56a35ac7ecddc6df1f017dc7b26c03a6686384.scope: Deactivated successfully.
Jan 29 09:14:19 compute-0 podman[95728]: 2026-01-29 09:14:19.888001147 +0000 UTC m=+0.358403104 container attach 18c5e1178b215c6d7d4efed2fe56a35ac7ecddc6df1f017dc7b26c03a6686384 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_brattain, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:19 compute-0 podman[95728]: 2026-01-29 09:14:19.888380127 +0000 UTC m=+0.358782054 container died 18c5e1178b215c6d7d4efed2fe56a35ac7ecddc6df1f017dc7b26c03a6686384 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_brattain, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4821dca5a34ca14e0d205d72034fb617bdbca87e96a067bdf16415d743d4344-merged.mount: Deactivated successfully.
Jan 29 09:14:19 compute-0 podman[95774]: 2026-01-29 09:14:19.938060224 +0000 UTC m=+0.095658075 container create 43fd1eaf2eccebb07c751a46c0857bcf3c7d4661ec943343292750872de7c86d (image=quay.io/ceph/ceph:v20, name=keen_lederberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 09:14:19 compute-0 podman[95728]: 2026-01-29 09:14:19.95758624 +0000 UTC m=+0.427988157 container remove 18c5e1178b215c6d7d4efed2fe56a35ac7ecddc6df1f017dc7b26c03a6686384 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_brattain, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 09:14:19 compute-0 systemd[1]: libpod-conmon-18c5e1178b215c6d7d4efed2fe56a35ac7ecddc6df1f017dc7b26c03a6686384.scope: Deactivated successfully.
Jan 29 09:14:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 29 09:14:19 compute-0 podman[95774]: 2026-01-29 09:14:19.888727517 +0000 UTC m=+0.046325388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:14:20 compute-0 systemd[1]: Started libpod-conmon-43fd1eaf2eccebb07c751a46c0857bcf3c7d4661ec943343292750872de7c86d.scope.
Jan 29 09:14:20 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b3333dedae1c687b97d52b05ef725634b2f1a175a7cc57909328eb9eaaff7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b3333dedae1c687b97d52b05ef725634b2f1a175a7cc57909328eb9eaaff7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:20 compute-0 podman[95774]: 2026-01-29 09:14:20.058967097 +0000 UTC m=+0.216564978 container init 43fd1eaf2eccebb07c751a46c0857bcf3c7d4661ec943343292750872de7c86d (image=quay.io/ceph/ceph:v20, name=keen_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 09:14:20 compute-0 podman[95774]: 2026-01-29 09:14:20.066353966 +0000 UTC m=+0.223951817 container start 43fd1eaf2eccebb07c751a46c0857bcf3c7d4661ec943343292750872de7c86d (image=quay.io/ceph/ceph:v20, name=keen_lederberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:14:20 compute-0 podman[95774]: 2026-01-29 09:14:20.08249644 +0000 UTC m=+0.240094291 container attach 43fd1eaf2eccebb07c751a46c0857bcf3c7d4661ec943343292750872de7c86d (image=quay.io/ceph/ceph:v20, name=keen_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:14:20 compute-0 podman[95812]: 2026-01-29 09:14:20.125897058 +0000 UTC m=+0.065704839 container create 10be92cb18a161386d376cb46d80e16ec02cc4516fab75af148e9839ab859fba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_montalcini, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:14:20 compute-0 systemd[1]: Started libpod-conmon-10be92cb18a161386d376cb46d80e16ec02cc4516fab75af148e9839ab859fba.scope.
Jan 29 09:14:20 compute-0 podman[95812]: 2026-01-29 09:14:20.092842349 +0000 UTC m=+0.032650150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:14:20 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3735b5e732ed2bf6c8bd1d3661e5f2d1c3f105db9197a2383db5ba83a16b67ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3735b5e732ed2bf6c8bd1d3661e5f2d1c3f105db9197a2383db5ba83a16b67ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3735b5e732ed2bf6c8bd1d3661e5f2d1c3f105db9197a2383db5ba83a16b67ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3735b5e732ed2bf6c8bd1d3661e5f2d1c3f105db9197a2383db5ba83a16b67ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:20 compute-0 podman[95812]: 2026-01-29 09:14:20.226479784 +0000 UTC m=+0.166287595 container init 10be92cb18a161386d376cb46d80e16ec02cc4516fab75af148e9839ab859fba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:20 compute-0 podman[95812]: 2026-01-29 09:14:20.235668731 +0000 UTC m=+0.175476512 container start 10be92cb18a161386d376cb46d80e16ec02cc4516fab75af148e9839ab859fba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_montalcini, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:14:20 compute-0 podman[95812]: 2026-01-29 09:14:20.240406379 +0000 UTC m=+0.180214190 container attach 10be92cb18a161386d376cb46d80e16ec02cc4516fab75af148e9839ab859fba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 09:14:20 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 29 09:14:20 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 29 09:14:20 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 29 09:14:20 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3995437223' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 29 09:14:20 compute-0 keen_lederberg[95804]: mimic
Jan 29 09:14:20 compute-0 systemd[1]: libpod-43fd1eaf2eccebb07c751a46c0857bcf3c7d4661ec943343292750872de7c86d.scope: Deactivated successfully.
Jan 29 09:14:20 compute-0 podman[95774]: 2026-01-29 09:14:20.569188244 +0000 UTC m=+0.726786095 container died 43fd1eaf2eccebb07c751a46c0857bcf3c7d4661ec943343292750872de7c86d (image=quay.io/ceph/ceph:v20, name=keen_lederberg, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:14:20 compute-0 ceph-mon[75183]: 2.10 scrub starts
Jan 29 09:14:20 compute-0 ceph-mon[75183]: 2.10 scrub ok
Jan 29 09:14:20 compute-0 ceph-mon[75183]: 4.15 scrub starts
Jan 29 09:14:20 compute-0 ceph-mon[75183]: 4.15 scrub ok
Jan 29 09:14:20 compute-0 ceph-mon[75183]: pgmap v109: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 29 09:14:20 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3995437223' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 29 09:14:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8b3333dedae1c687b97d52b05ef725634b2f1a175a7cc57909328eb9eaaff7f-merged.mount: Deactivated successfully.
Jan 29 09:14:20 compute-0 podman[95774]: 2026-01-29 09:14:20.648924199 +0000 UTC m=+0.806522060 container remove 43fd1eaf2eccebb07c751a46c0857bcf3c7d4661ec943343292750872de7c86d (image=quay.io/ceph/ceph:v20, name=keen_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:14:20 compute-0 systemd[1]: libpod-conmon-43fd1eaf2eccebb07c751a46c0857bcf3c7d4661ec943343292750872de7c86d.scope: Deactivated successfully.
Jan 29 09:14:20 compute-0 sudo[95765]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:20 compute-0 lvm[95938]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:14:20 compute-0 lvm[95938]: VG ceph_vg0 finished
Jan 29 09:14:20 compute-0 lvm[95941]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:14:20 compute-0 lvm[95941]: VG ceph_vg1 finished
Jan 29 09:14:20 compute-0 lvm[95943]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:14:20 compute-0 lvm[95943]: VG ceph_vg2 finished
Jan 29 09:14:21 compute-0 dreamy_montalcini[95831]: {}
Jan 29 09:14:21 compute-0 systemd[1]: libpod-10be92cb18a161386d376cb46d80e16ec02cc4516fab75af148e9839ab859fba.scope: Deactivated successfully.
Jan 29 09:14:21 compute-0 systemd[1]: libpod-10be92cb18a161386d376cb46d80e16ec02cc4516fab75af148e9839ab859fba.scope: Consumed 1.270s CPU time.
Jan 29 09:14:21 compute-0 podman[95946]: 2026-01-29 09:14:21.159824765 +0000 UTC m=+0.025705653 container died 10be92cb18a161386d376cb46d80e16ec02cc4516fab75af148e9839ab859fba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3735b5e732ed2bf6c8bd1d3661e5f2d1c3f105db9197a2383db5ba83a16b67ce-merged.mount: Deactivated successfully.
Jan 29 09:14:21 compute-0 podman[95946]: 2026-01-29 09:14:21.332427638 +0000 UTC m=+0.198308526 container remove 10be92cb18a161386d376cb46d80e16ec02cc4516fab75af148e9839ab859fba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_montalcini, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:14:21 compute-0 systemd[1]: libpod-conmon-10be92cb18a161386d376cb46d80e16ec02cc4516fab75af148e9839ab859fba.scope: Deactivated successfully.
Jan 29 09:14:21 compute-0 sudo[95692]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:21 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:14:21 compute-0 sudo[95985]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olyplfmgqlrtrucljuaropuzbetadkjl ; /usr/bin/python3'
Jan 29 09:14:21 compute-0 sudo[95985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:21 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:21 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:14:21 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:21 compute-0 python3[95987]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:14:21 compute-0 sudo[95988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:14:21 compute-0 sudo[95988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:14:21 compute-0 sudo[95988]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:21 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Jan 29 09:14:21 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Jan 29 09:14:21 compute-0 ceph-mon[75183]: 5.17 scrub starts
Jan 29 09:14:21 compute-0 ceph-mon[75183]: 5.17 scrub ok
Jan 29 09:14:21 compute-0 ceph-mon[75183]: 3.1a scrub starts
Jan 29 09:14:21 compute-0 ceph-mon[75183]: 3.1a scrub ok
Jan 29 09:14:21 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:21 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:14:21 compute-0 podman[96011]: 2026-01-29 09:14:21.70902608 +0000 UTC m=+0.110732379 container create d81563f156d449f11479930f6f85402aab342d64e5ce07b3195ea70cc04b2c02 (image=quay.io/ceph/ceph:v20, name=pedantic_easley, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:21 compute-0 podman[96011]: 2026-01-29 09:14:21.625392671 +0000 UTC m=+0.027098990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 09:14:21 compute-0 systemd[1]: Started libpod-conmon-d81563f156d449f11479930f6f85402aab342d64e5ce07b3195ea70cc04b2c02.scope.
Jan 29 09:14:21 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e80adcd37b90ee8e84249fbd759f2d8e39f313bc490f12649c7744a0e38a6fb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e80adcd37b90ee8e84249fbd759f2d8e39f313bc490f12649c7744a0e38a6fb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:14:21 compute-0 podman[96011]: 2026-01-29 09:14:21.815568266 +0000 UTC m=+0.217274585 container init d81563f156d449f11479930f6f85402aab342d64e5ce07b3195ea70cc04b2c02 (image=quay.io/ceph/ceph:v20, name=pedantic_easley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:21 compute-0 podman[96011]: 2026-01-29 09:14:21.824596839 +0000 UTC m=+0.226303138 container start d81563f156d449f11479930f6f85402aab342d64e5ce07b3195ea70cc04b2c02 (image=quay.io/ceph/ceph:v20, name=pedantic_easley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:14:21 compute-0 podman[96011]: 2026-01-29 09:14:21.8302021 +0000 UTC m=+0.231908419 container attach d81563f156d449f11479930f6f85402aab342d64e5ce07b3195ea70cc04b2c02 (image=quay.io/ceph/ceph:v20, name=pedantic_easley, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 29 09:14:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 29 09:14:22 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3827634747' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 29 09:14:22 compute-0 pedantic_easley[96029]: 
Jan 29 09:14:22 compute-0 pedantic_easley[96029]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":6}}
Jan 29 09:14:22 compute-0 systemd[1]: libpod-d81563f156d449f11479930f6f85402aab342d64e5ce07b3195ea70cc04b2c02.scope: Deactivated successfully.
Jan 29 09:14:22 compute-0 podman[96011]: 2026-01-29 09:14:22.372299525 +0000 UTC m=+0.774005824 container died d81563f156d449f11479930f6f85402aab342d64e5ce07b3195ea70cc04b2c02 (image=quay.io/ceph/ceph:v20, name=pedantic_easley, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e80adcd37b90ee8e84249fbd759f2d8e39f313bc490f12649c7744a0e38a6fb-merged.mount: Deactivated successfully.
Jan 29 09:14:22 compute-0 podman[96011]: 2026-01-29 09:14:22.422846355 +0000 UTC m=+0.824552644 container remove d81563f156d449f11479930f6f85402aab342d64e5ce07b3195ea70cc04b2c02 (image=quay.io/ceph/ceph:v20, name=pedantic_easley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:14:22 compute-0 systemd[1]: libpod-conmon-d81563f156d449f11479930f6f85402aab342d64e5ce07b3195ea70cc04b2c02.scope: Deactivated successfully.
Jan 29 09:14:22 compute-0 sudo[95985]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:22 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Jan 29 09:14:22 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Jan 29 09:14:22 compute-0 ceph-mon[75183]: 6.16 scrub starts
Jan 29 09:14:22 compute-0 ceph-mon[75183]: 6.16 scrub ok
Jan 29 09:14:22 compute-0 ceph-mon[75183]: pgmap v110: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 29 09:14:22 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3827634747' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 29 09:14:22 compute-0 ceph-mon[75183]: 6.10 scrub starts
Jan 29 09:14:22 compute-0 ceph-mon[75183]: 6.10 scrub ok
Jan 29 09:14:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:14:23 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 29 09:14:23 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 29 09:14:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 29 09:14:24 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Jan 29 09:14:24 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Jan 29 09:14:24 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 29 09:14:24 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 29 09:14:25 compute-0 ceph-mon[75183]: 7.12 scrub starts
Jan 29 09:14:25 compute-0 ceph-mon[75183]: 7.12 scrub ok
Jan 29 09:14:25 compute-0 ceph-mon[75183]: pgmap v111: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 29 09:14:25 compute-0 ceph-mon[75183]: 6.12 scrub starts
Jan 29 09:14:25 compute-0 ceph-mon[75183]: 6.12 scrub ok
Jan 29 09:14:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v112: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:26 compute-0 ceph-mon[75183]: 5.8 scrub starts
Jan 29 09:14:26 compute-0 ceph-mon[75183]: 5.8 scrub ok
Jan 29 09:14:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:14:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:14:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:14:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:14:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:14:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:14:26 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 29 09:14:26 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 29 09:14:27 compute-0 ceph-mon[75183]: pgmap v112: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:27 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 29 09:14:27 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 29 09:14:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v113: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:28 compute-0 ceph-mon[75183]: 2.e scrub starts
Jan 29 09:14:28 compute-0 ceph-mon[75183]: 2.e scrub ok
Jan 29 09:14:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:14:29 compute-0 ceph-mon[75183]: 2.1a scrub starts
Jan 29 09:14:29 compute-0 ceph-mon[75183]: 2.1a scrub ok
Jan 29 09:14:29 compute-0 ceph-mon[75183]: pgmap v113: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:29 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 29 09:14:29 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 29 09:14:29 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 29 09:14:29 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 29 09:14:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v114: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:30 compute-0 ceph-mon[75183]: 3.14 scrub starts
Jan 29 09:14:30 compute-0 ceph-mon[75183]: 3.14 scrub ok
Jan 29 09:14:30 compute-0 ceph-mon[75183]: pgmap v114: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:30 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 29 09:14:30 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 29 09:14:31 compute-0 ceph-mon[75183]: 2.c scrub starts
Jan 29 09:14:31 compute-0 ceph-mon[75183]: 2.c scrub ok
Jan 29 09:14:31 compute-0 ceph-mon[75183]: 4.c scrub starts
Jan 29 09:14:31 compute-0 ceph-mon[75183]: 4.c scrub ok
Jan 29 09:14:31 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 29 09:14:31 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 29 09:14:31 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 29 09:14:31 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 29 09:14:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v115: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:32 compute-0 ceph-mon[75183]: 7.10 scrub starts
Jan 29 09:14:32 compute-0 ceph-mon[75183]: 7.10 scrub ok
Jan 29 09:14:32 compute-0 ceph-mon[75183]: pgmap v115: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:32 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 29 09:14:32 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 29 09:14:32 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 29 09:14:32 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 29 09:14:32 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 29 09:14:32 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 29 09:14:33 compute-0 ceph-mon[75183]: 5.a scrub starts
Jan 29 09:14:33 compute-0 ceph-mon[75183]: 5.a scrub ok
Jan 29 09:14:33 compute-0 ceph-mon[75183]: 3.13 scrub starts
Jan 29 09:14:33 compute-0 ceph-mon[75183]: 3.13 scrub ok
Jan 29 09:14:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:14:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v116: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:34 compute-0 ceph-mon[75183]: 4.0 scrub starts
Jan 29 09:14:34 compute-0 ceph-mon[75183]: 4.0 scrub ok
Jan 29 09:14:34 compute-0 ceph-mon[75183]: 5.b scrub starts
Jan 29 09:14:34 compute-0 ceph-mon[75183]: 5.b scrub ok
Jan 29 09:14:34 compute-0 ceph-mon[75183]: pgmap v116: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:34 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 29 09:14:34 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 29 09:14:35 compute-0 ceph-mon[75183]: 5.0 scrub starts
Jan 29 09:14:35 compute-0 ceph-mon[75183]: 5.0 scrub ok
Jan 29 09:14:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v117: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:36 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 29 09:14:36 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 29 09:14:36 compute-0 ceph-mon[75183]: pgmap v117: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:36 compute-0 sshd-session[96066]: Accepted publickey for zuul from 192.168.122.30 port 39304 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:14:36 compute-0 systemd-logind[799]: New session 33 of user zuul.
Jan 29 09:14:36 compute-0 systemd[1]: Started Session 33 of User zuul.
Jan 29 09:14:36 compute-0 sshd-session[96066]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:14:37 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 29 09:14:37 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 29 09:14:37 compute-0 ceph-mon[75183]: 7.17 scrub starts
Jan 29 09:14:37 compute-0 ceph-mon[75183]: 7.17 scrub ok
Jan 29 09:14:37 compute-0 ceph-mon[75183]: 6.0 scrub starts
Jan 29 09:14:37 compute-0 ceph-mon[75183]: 6.0 scrub ok
Jan 29 09:14:37 compute-0 python3.9[96219]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:14:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v118: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:14:38 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 29 09:14:38 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 29 09:14:38 compute-0 ceph-mon[75183]: pgmap v118: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:38 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 29 09:14:38 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 29 09:14:39 compute-0 sudo[96435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfolbfmrcskltdiipavoatxggqmjojbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678078.9405367-27-89444744684243/AnsiballZ_command.py'
Jan 29 09:14:39 compute-0 sudo[96435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:14:39 compute-0 python3.9[96437]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:14:39 compute-0 ceph-mon[75183]: 7.16 scrub starts
Jan 29 09:14:39 compute-0 ceph-mon[75183]: 7.16 scrub ok
Jan 29 09:14:39 compute-0 ceph-mon[75183]: 2.1 scrub starts
Jan 29 09:14:39 compute-0 ceph-mon[75183]: 2.1 scrub ok
Jan 29 09:14:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v119: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 29 09:14:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 29 09:14:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 29 09:14:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 29 09:14:40 compute-0 ceph-mon[75183]: pgmap v119: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:41 compute-0 ceph-mon[75183]: 3.10 scrub starts
Jan 29 09:14:41 compute-0 ceph-mon[75183]: 3.10 scrub ok
Jan 29 09:14:41 compute-0 ceph-mon[75183]: 5.6 scrub starts
Jan 29 09:14:41 compute-0 ceph-mon[75183]: 5.6 scrub ok
Jan 29 09:14:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v120: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:42 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 29 09:14:42 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 29 09:14:42 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 29 09:14:42 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 29 09:14:42 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 29 09:14:42 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 29 09:14:42 compute-0 ceph-mon[75183]: pgmap v120: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:14:43 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 29 09:14:43 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 29 09:14:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v121: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:44 compute-0 ceph-mon[75183]: 7.14 scrub starts
Jan 29 09:14:44 compute-0 ceph-mon[75183]: 7.14 scrub ok
Jan 29 09:14:44 compute-0 ceph-mon[75183]: 6.3 scrub starts
Jan 29 09:14:44 compute-0 ceph-mon[75183]: 6.3 scrub ok
Jan 29 09:14:44 compute-0 ceph-mon[75183]: 5.e scrub starts
Jan 29 09:14:44 compute-0 ceph-mon[75183]: 5.e scrub ok
Jan 29 09:14:44 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 29 09:14:44 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 29 09:14:45 compute-0 ceph-mon[75183]: 4.3 scrub starts
Jan 29 09:14:45 compute-0 ceph-mon[75183]: 4.3 scrub ok
Jan 29 09:14:45 compute-0 ceph-mon[75183]: pgmap v121: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 29 09:14:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 29 09:14:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v122: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:46 compute-0 ceph-mon[75183]: 5.d scrub starts
Jan 29 09:14:46 compute-0 ceph-mon[75183]: 5.d scrub ok
Jan 29 09:14:46 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 29 09:14:46 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 29 09:14:47 compute-0 ceph-mon[75183]: 7.b scrub starts
Jan 29 09:14:47 compute-0 ceph-mon[75183]: 7.b scrub ok
Jan 29 09:14:47 compute-0 ceph-mon[75183]: pgmap v122: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:47 compute-0 sudo[96435]: pam_unix(sudo:session): session closed for user root
Jan 29 09:14:47 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Jan 29 09:14:47 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Jan 29 09:14:47 compute-0 sshd-session[96069]: Connection closed by 192.168.122.30 port 39304
Jan 29 09:14:47 compute-0 sshd-session[96066]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:14:47 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Jan 29 09:14:47 compute-0 systemd[1]: session-33.scope: Consumed 8.854s CPU time.
Jan 29 09:14:47 compute-0 systemd-logind[799]: Session 33 logged out. Waiting for processes to exit.
Jan 29 09:14:47 compute-0 systemd-logind[799]: Removed session 33.
Jan 29 09:14:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v123: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:14:48 compute-0 ceph-mon[75183]: 3.d scrub starts
Jan 29 09:14:48 compute-0 ceph-mon[75183]: 3.d scrub ok
Jan 29 09:14:48 compute-0 ceph-mon[75183]: 6.1b scrub starts
Jan 29 09:14:48 compute-0 ceph-mon[75183]: 6.1b scrub ok
Jan 29 09:14:48 compute-0 ceph-mon[75183]: pgmap v123: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:48 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 29 09:14:48 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 29 09:14:48 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 29 09:14:48 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 29 09:14:49 compute-0 ceph-mon[75183]: 3.b scrub starts
Jan 29 09:14:49 compute-0 ceph-mon[75183]: 3.b scrub ok
Jan 29 09:14:49 compute-0 ceph-mon[75183]: 4.19 scrub starts
Jan 29 09:14:49 compute-0 ceph-mon[75183]: 4.19 scrub ok
Jan 29 09:14:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v124: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:50 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 29 09:14:50 compute-0 ceph-mon[75183]: pgmap v124: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:50 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 29 09:14:51 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 29 09:14:51 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 29 09:14:51 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Jan 29 09:14:51 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Jan 29 09:14:51 compute-0 ceph-mon[75183]: 5.1b scrub starts
Jan 29 09:14:51 compute-0 ceph-mon[75183]: 5.1b scrub ok
Jan 29 09:14:51 compute-0 ceph-mon[75183]: 6.18 scrub starts
Jan 29 09:14:51 compute-0 ceph-mon[75183]: 6.18 scrub ok
Jan 29 09:14:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v125: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:52 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 29 09:14:52 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 29 09:14:53 compute-0 ceph-mon[75183]: 3.2 scrub starts
Jan 29 09:14:53 compute-0 ceph-mon[75183]: 3.2 scrub ok
Jan 29 09:14:53 compute-0 ceph-mon[75183]: pgmap v125: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:14:53 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 29 09:14:53 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 29 09:14:53 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 29 09:14:53 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 29 09:14:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v126: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:54 compute-0 ceph-mon[75183]: 2.1e scrub starts
Jan 29 09:14:54 compute-0 ceph-mon[75183]: 2.1e scrub ok
Jan 29 09:14:54 compute-0 ceph-mon[75183]: 6.7 scrub starts
Jan 29 09:14:54 compute-0 ceph-mon[75183]: 6.7 scrub ok
Jan 29 09:14:54 compute-0 ceph-mon[75183]: pgmap v126: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:54 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 29 09:14:54 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 29 09:14:55 compute-0 ceph-mon[75183]: 2.0 scrub starts
Jan 29 09:14:55 compute-0 ceph-mon[75183]: 2.0 scrub ok
Jan 29 09:14:55 compute-0 ceph-mon[75183]: 3.0 scrub starts
Jan 29 09:14:55 compute-0 ceph-mon[75183]: 3.0 scrub ok
Jan 29 09:14:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:14:55
Jan 29 09:14:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:14:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:14:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'vms', 'volumes']
Jan 29 09:14:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:14:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v127: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:56 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 29 09:14:56 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 29 09:14:56 compute-0 ceph-mon[75183]: pgmap v127: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:14:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:14:56 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 29 09:14:56 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 29 09:14:57 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 29 09:14:57 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 29 09:14:57 compute-0 ceph-mon[75183]: 7.0 scrub starts
Jan 29 09:14:57 compute-0 ceph-mon[75183]: 7.0 scrub ok
Jan 29 09:14:57 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 29 09:14:57 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 29 09:14:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v128: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:14:58 compute-0 ceph-mon[75183]: 6.f scrub starts
Jan 29 09:14:58 compute-0 ceph-mon[75183]: 6.f scrub ok
Jan 29 09:14:58 compute-0 ceph-mon[75183]: 3.4 scrub starts
Jan 29 09:14:58 compute-0 ceph-mon[75183]: 3.4 scrub ok
Jan 29 09:14:58 compute-0 ceph-mon[75183]: pgmap v128: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:14:59 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 29 09:14:59 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 29 09:14:59 compute-0 ceph-mon[75183]: 4.1a scrub starts
Jan 29 09:14:59 compute-0 ceph-mon[75183]: 4.1a scrub ok
Jan 29 09:14:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v129: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:00 compute-0 ceph-mon[75183]: 7.7 scrub starts
Jan 29 09:15:00 compute-0 ceph-mon[75183]: 7.7 scrub ok
Jan 29 09:15:00 compute-0 ceph-mon[75183]: pgmap v129: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:15:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v130: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:02 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Jan 29 09:15:02 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Jan 29 09:15:03 compute-0 ceph-mon[75183]: pgmap v130: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:15:03 compute-0 sshd-session[96494]: Accepted publickey for zuul from 192.168.122.30 port 57714 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:15:03 compute-0 systemd-logind[799]: New session 34 of user zuul.
Jan 29 09:15:03 compute-0 systemd[1]: Started Session 34 of User zuul.
Jan 29 09:15:03 compute-0 sshd-session[96494]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:15:03 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 29 09:15:03 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 29 09:15:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v131: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:04 compute-0 ceph-mon[75183]: 6.19 scrub starts
Jan 29 09:15:04 compute-0 ceph-mon[75183]: 6.19 scrub ok
Jan 29 09:15:04 compute-0 python3.9[96647]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 29 09:15:04 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 29 09:15:04 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 29 09:15:05 compute-0 ceph-mon[75183]: 4.18 scrub starts
Jan 29 09:15:05 compute-0 ceph-mon[75183]: 4.18 scrub ok
Jan 29 09:15:05 compute-0 ceph-mon[75183]: pgmap v131: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:05 compute-0 python3.9[96821]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:15:05 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 29 09:15:05 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 29 09:15:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v132: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:06 compute-0 sudo[96975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqipreyrjpjmdbdzcvdtmhgentzvmmkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678105.892841-40-9320026057081/AnsiballZ_command.py'
Jan 29 09:15:06 compute-0 sudo[96975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:15:06 compute-0 ceph-mon[75183]: 4.e scrub starts
Jan 29 09:15:06 compute-0 ceph-mon[75183]: 4.e scrub ok
Jan 29 09:15:06 compute-0 ceph-mon[75183]: pgmap v132: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:06 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 29 09:15:06 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 29 09:15:06 compute-0 python3.9[96977]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:15:06 compute-0 sudo[96975]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:07 compute-0 sudo[97128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdxjethoeipyfzqxjunhyeviyqblgoel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678106.7923431-52-110384603827423/AnsiballZ_stat.py'
Jan 29 09:15:07 compute-0 sudo[97128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:15:07 compute-0 python3.9[97130]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:15:07 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 29 09:15:07 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 29 09:15:07 compute-0 sudo[97128]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:07 compute-0 ceph-mon[75183]: 4.b scrub starts
Jan 29 09:15:07 compute-0 ceph-mon[75183]: 4.b scrub ok
Jan 29 09:15:07 compute-0 ceph-mon[75183]: 7.d scrub starts
Jan 29 09:15:07 compute-0 ceph-mon[75183]: 7.d scrub ok
Jan 29 09:15:07 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 29 09:15:07 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 29 09:15:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v133: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:08 compute-0 sudo[97282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmjxmtshklbulqorwpbuiypcenqcqcew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678107.6649826-63-240966219808996/AnsiballZ_file.py'
Jan 29 09:15:08 compute-0 sudo[97282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:15:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:15:08 compute-0 python3.9[97284]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:15:08 compute-0 sudo[97282]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:08 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 29 09:15:08 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 29 09:15:08 compute-0 ceph-mon[75183]: 3.1c scrub starts
Jan 29 09:15:08 compute-0 ceph-mon[75183]: 3.1c scrub ok
Jan 29 09:15:08 compute-0 ceph-mon[75183]: pgmap v133: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:09 compute-0 sudo[97434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxbvviqravualmcfiqetrwmhdsbajdoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678108.6815221-72-46667918860850/AnsiballZ_file.py'
Jan 29 09:15:09 compute-0 sudo[97434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:15:09 compute-0 python3.9[97436]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:15:09 compute-0 sudo[97434]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:09 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 29 09:15:09 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 29 09:15:09 compute-0 ceph-mon[75183]: 6.9 scrub starts
Jan 29 09:15:09 compute-0 ceph-mon[75183]: 6.9 scrub ok
Jan 29 09:15:09 compute-0 ceph-mon[75183]: 4.1 scrub starts
Jan 29 09:15:09 compute-0 ceph-mon[75183]: 4.1 scrub ok
Jan 29 09:15:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v134: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:10 compute-0 python3.9[97586]: ansible-ansible.builtin.service_facts Invoked
Jan 29 09:15:10 compute-0 network[97603]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 09:15:10 compute-0 network[97604]: 'network-scripts' will be removed from distribution in near future.
Jan 29 09:15:10 compute-0 network[97605]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 09:15:10 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 29 09:15:10 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 29 09:15:10 compute-0 ceph-mon[75183]: 6.5 scrub starts
Jan 29 09:15:10 compute-0 ceph-mon[75183]: 6.5 scrub ok
Jan 29 09:15:10 compute-0 ceph-mon[75183]: pgmap v134: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:11 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 29 09:15:11 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 29 09:15:11 compute-0 ceph-mon[75183]: 7.19 scrub starts
Jan 29 09:15:11 compute-0 ceph-mon[75183]: 7.19 scrub ok
Jan 29 09:15:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v135: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:13 compute-0 ceph-mon[75183]: 6.a scrub starts
Jan 29 09:15:13 compute-0 ceph-mon[75183]: 6.a scrub ok
Jan 29 09:15:13 compute-0 ceph-mon[75183]: pgmap v135: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:13 compute-0 python3.9[97865]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:15:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:15:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v136: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:14 compute-0 python3.9[98015]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:15:14 compute-0 ceph-mon[75183]: pgmap v136: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:14 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 29 09:15:14 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 29 09:15:15 compute-0 python3.9[98169]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:15:15 compute-0 sudo[98325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abtoxjeorrvrbmjyjujptksivqoekbii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678115.505669-120-244249806369367/AnsiballZ_setup.py'
Jan 29 09:15:15 compute-0 sudo[98325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:15:15 compute-0 ceph-mon[75183]: 4.1d scrub starts
Jan 29 09:15:15 compute-0 ceph-mon[75183]: 4.1d scrub ok
Jan 29 09:15:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v137: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:16 compute-0 python3.9[98327]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:15:16 compute-0 sudo[98325]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:16 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 29 09:15:16 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 29 09:15:16 compute-0 sudo[98409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nphlevvjmxjwsvlabmgonjnsgfdozebl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678115.505669-120-244249806369367/AnsiballZ_dnf.py'
Jan 29 09:15:16 compute-0 sudo[98409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:15:16 compute-0 ceph-mon[75183]: pgmap v137: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:16 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 29 09:15:16 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 29 09:15:16 compute-0 python3.9[98411]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:15:17 compute-0 ceph-mon[75183]: 4.d scrub starts
Jan 29 09:15:17 compute-0 ceph-mon[75183]: 4.d scrub ok
Jan 29 09:15:17 compute-0 ceph-mon[75183]: 4.1e scrub starts
Jan 29 09:15:17 compute-0 ceph-mon[75183]: 4.1e scrub ok
Jan 29 09:15:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v138: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:15:18 compute-0 ceph-mon[75183]: pgmap v138: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:18 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 29 09:15:18 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 29 09:15:19 compute-0 ceph-mon[75183]: 4.1f scrub starts
Jan 29 09:15:19 compute-0 ceph-mon[75183]: 4.1f scrub ok
Jan 29 09:15:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v139: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:20 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 29 09:15:20 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 29 09:15:20 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 29 09:15:20 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 29 09:15:20 compute-0 ceph-mon[75183]: pgmap v139: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:20 compute-0 ceph-mon[75183]: 4.a scrub starts
Jan 29 09:15:21 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Jan 29 09:15:21 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Jan 29 09:15:21 compute-0 sudo[98479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:15:21 compute-0 sudo[98479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:15:21 compute-0 sudo[98479]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:21 compute-0 sudo[98504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:15:21 compute-0 sudo[98504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:15:21 compute-0 ceph-mon[75183]: 6.c scrub starts
Jan 29 09:15:21 compute-0 ceph-mon[75183]: 6.c scrub ok
Jan 29 09:15:21 compute-0 ceph-mon[75183]: 4.a scrub ok
Jan 29 09:15:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v140: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:22 compute-0 sudo[98504]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:15:22 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:15:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:15:22 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:15:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:15:22 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:15:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:15:22 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:15:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:15:22 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:15:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:15:22 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:15:22 compute-0 sudo[98560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:15:22 compute-0 sudo[98560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:15:22 compute-0 sudo[98560]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:22 compute-0 sudo[98585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:15:22 compute-0 sudo[98585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:15:22 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 29 09:15:22 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 29 09:15:22 compute-0 podman[98622]: 2026-01-29 09:15:22.629025048 +0000 UTC m=+0.041115306 container create 6b186a44c7e8b819dcebd3130616a769a1ab02f633de179c95864ade7d652226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 29 09:15:22 compute-0 systemd[1]: Started libpod-conmon-6b186a44c7e8b819dcebd3130616a769a1ab02f633de179c95864ade7d652226.scope.
Jan 29 09:15:22 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:15:22 compute-0 podman[98622]: 2026-01-29 09:15:22.609499209 +0000 UTC m=+0.021589497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:15:22 compute-0 podman[98622]: 2026-01-29 09:15:22.716678507 +0000 UTC m=+0.128768795 container init 6b186a44c7e8b819dcebd3130616a769a1ab02f633de179c95864ade7d652226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 09:15:22 compute-0 podman[98622]: 2026-01-29 09:15:22.724663768 +0000 UTC m=+0.136754026 container start 6b186a44c7e8b819dcebd3130616a769a1ab02f633de179c95864ade7d652226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhaskara, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:15:22 compute-0 frosty_bhaskara[98638]: 167 167
Jan 29 09:15:22 compute-0 podman[98622]: 2026-01-29 09:15:22.728833043 +0000 UTC m=+0.140923321 container attach 6b186a44c7e8b819dcebd3130616a769a1ab02f633de179c95864ade7d652226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 29 09:15:22 compute-0 systemd[1]: libpod-6b186a44c7e8b819dcebd3130616a769a1ab02f633de179c95864ade7d652226.scope: Deactivated successfully.
Jan 29 09:15:22 compute-0 podman[98622]: 2026-01-29 09:15:22.733177423 +0000 UTC m=+0.145267681 container died 6b186a44c7e8b819dcebd3130616a769a1ab02f633de179c95864ade7d652226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhaskara, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:15:22 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 29 09:15:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cb8f0731cee2d7b31366308294cab8f72c9e6fd8a4c616e0fa8a4ad8674272d-merged.mount: Deactivated successfully.
Jan 29 09:15:22 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 29 09:15:22 compute-0 podman[98622]: 2026-01-29 09:15:22.776360324 +0000 UTC m=+0.188450582 container remove 6b186a44c7e8b819dcebd3130616a769a1ab02f633de179c95864ade7d652226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhaskara, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 09:15:22 compute-0 systemd[1]: libpod-conmon-6b186a44c7e8b819dcebd3130616a769a1ab02f633de179c95864ade7d652226.scope: Deactivated successfully.
Jan 29 09:15:22 compute-0 ceph-mon[75183]: 6.1e scrub starts
Jan 29 09:15:22 compute-0 ceph-mon[75183]: 6.1e scrub ok
Jan 29 09:15:22 compute-0 ceph-mon[75183]: pgmap v140: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:22 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:15:22 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:15:22 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:15:22 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:15:22 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:15:22 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:15:22 compute-0 podman[98661]: 2026-01-29 09:15:22.900574252 +0000 UTC m=+0.025459783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:15:23 compute-0 podman[98661]: 2026-01-29 09:15:23.01206901 +0000 UTC m=+0.136954521 container create aa42c07be8ca43fd71abff36f01d7856f4abe6223a35299052eec038c253548f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_vaughan, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:15:23 compute-0 systemd[1]: Started libpod-conmon-aa42c07be8ca43fd71abff36f01d7856f4abe6223a35299052eec038c253548f.scope.
Jan 29 09:15:23 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d1aa763cb8ba92c3bf449c82af4177215ffa348b811b07d77eec653b4c90f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d1aa763cb8ba92c3bf449c82af4177215ffa348b811b07d77eec653b4c90f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d1aa763cb8ba92c3bf449c82af4177215ffa348b811b07d77eec653b4c90f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d1aa763cb8ba92c3bf449c82af4177215ffa348b811b07d77eec653b4c90f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d1aa763cb8ba92c3bf449c82af4177215ffa348b811b07d77eec653b4c90f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:15:23 compute-0 podman[98661]: 2026-01-29 09:15:23.162915763 +0000 UTC m=+0.287801304 container init aa42c07be8ca43fd71abff36f01d7856f4abe6223a35299052eec038c253548f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_vaughan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:15:23 compute-0 podman[98661]: 2026-01-29 09:15:23.170719568 +0000 UTC m=+0.295605079 container start aa42c07be8ca43fd71abff36f01d7856f4abe6223a35299052eec038c253548f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_vaughan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:15:23 compute-0 podman[98661]: 2026-01-29 09:15:23.21572271 +0000 UTC m=+0.340608241 container attach aa42c07be8ca43fd71abff36f01d7856f4abe6223a35299052eec038c253548f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_vaughan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:15:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:15:23 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 29 09:15:23 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 29 09:15:23 compute-0 nervous_vaughan[98678]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:15:23 compute-0 nervous_vaughan[98678]: --> All data devices are unavailable
Jan 29 09:15:23 compute-0 systemd[1]: libpod-aa42c07be8ca43fd71abff36f01d7856f4abe6223a35299052eec038c253548f.scope: Deactivated successfully.
Jan 29 09:15:23 compute-0 podman[98661]: 2026-01-29 09:15:23.61898376 +0000 UTC m=+0.743869271 container died aa42c07be8ca43fd71abff36f01d7856f4abe6223a35299052eec038c253548f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 09:15:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-57d1aa763cb8ba92c3bf449c82af4177215ffa348b811b07d77eec653b4c90f4-merged.mount: Deactivated successfully.
Jan 29 09:15:23 compute-0 podman[98661]: 2026-01-29 09:15:23.720189643 +0000 UTC m=+0.845075154 container remove aa42c07be8ca43fd71abff36f01d7856f4abe6223a35299052eec038c253548f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 09:15:23 compute-0 systemd[1]: libpod-conmon-aa42c07be8ca43fd71abff36f01d7856f4abe6223a35299052eec038c253548f.scope: Deactivated successfully.
Jan 29 09:15:23 compute-0 sudo[98585]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:23 compute-0 sudo[98716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:15:23 compute-0 sudo[98716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:15:23 compute-0 sudo[98716]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:23 compute-0 ceph-mon[75183]: 6.d scrub starts
Jan 29 09:15:23 compute-0 ceph-mon[75183]: 6.d scrub ok
Jan 29 09:15:23 compute-0 ceph-mon[75183]: 4.6 scrub starts
Jan 29 09:15:23 compute-0 ceph-mon[75183]: 4.6 scrub ok
Jan 29 09:15:23 compute-0 sudo[98741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:15:23 compute-0 sudo[98741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:15:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v141: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:24 compute-0 podman[98779]: 2026-01-29 09:15:24.214999849 +0000 UTC m=+0.076712489 container create bf63d72ae4c3ade94b8aacca227d39add097d6c0148e87bfdc04c7a1fd89a1eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_faraday, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:15:24 compute-0 podman[98779]: 2026-01-29 09:15:24.159405624 +0000 UTC m=+0.021118284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:15:24 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 29 09:15:24 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 29 09:15:24 compute-0 systemd[1]: Started libpod-conmon-bf63d72ae4c3ade94b8aacca227d39add097d6c0148e87bfdc04c7a1fd89a1eb.scope.
Jan 29 09:15:24 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:15:24 compute-0 podman[98779]: 2026-01-29 09:15:24.510674739 +0000 UTC m=+0.372387389 container init bf63d72ae4c3ade94b8aacca227d39add097d6c0148e87bfdc04c7a1fd89a1eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 09:15:24 compute-0 podman[98779]: 2026-01-29 09:15:24.516530291 +0000 UTC m=+0.378242931 container start bf63d72ae4c3ade94b8aacca227d39add097d6c0148e87bfdc04c7a1fd89a1eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 09:15:24 compute-0 flamboyant_faraday[98795]: 167 167
Jan 29 09:15:24 compute-0 systemd[1]: libpod-bf63d72ae4c3ade94b8aacca227d39add097d6c0148e87bfdc04c7a1fd89a1eb.scope: Deactivated successfully.
Jan 29 09:15:24 compute-0 conmon[98795]: conmon bf63d72ae4c3ade94b8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf63d72ae4c3ade94b8aacca227d39add097d6c0148e87bfdc04c7a1fd89a1eb.scope/container/memory.events
Jan 29 09:15:24 compute-0 podman[98779]: 2026-01-29 09:15:24.536709897 +0000 UTC m=+0.398422567 container attach bf63d72ae4c3ade94b8aacca227d39add097d6c0148e87bfdc04c7a1fd89a1eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_faraday, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 09:15:24 compute-0 podman[98779]: 2026-01-29 09:15:24.539193826 +0000 UTC m=+0.400906466 container died bf63d72ae4c3ade94b8aacca227d39add097d6c0148e87bfdc04c7a1fd89a1eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_faraday, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:15:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-db7d2f40bb1649f44cc51e569eb4eb5c50313eda30ed3718592be57f2cb73494-merged.mount: Deactivated successfully.
Jan 29 09:15:24 compute-0 podman[98779]: 2026-01-29 09:15:24.612029986 +0000 UTC m=+0.473742606 container remove bf63d72ae4c3ade94b8aacca227d39add097d6c0148e87bfdc04c7a1fd89a1eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Jan 29 09:15:24 compute-0 systemd[1]: libpod-conmon-bf63d72ae4c3ade94b8aacca227d39add097d6c0148e87bfdc04c7a1fd89a1eb.scope: Deactivated successfully.
Jan 29 09:15:24 compute-0 podman[98819]: 2026-01-29 09:15:24.766764477 +0000 UTC m=+0.058486106 container create 551022f74e6c05be216f820cd2f8d7f4d1a9ec01d68aa762106a032c15031b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:15:24 compute-0 systemd[1]: Started libpod-conmon-551022f74e6c05be216f820cd2f8d7f4d1a9ec01d68aa762106a032c15031b20.scope.
Jan 29 09:15:24 compute-0 podman[98819]: 2026-01-29 09:15:24.733713034 +0000 UTC m=+0.025434693 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:15:24 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67e69a0f626fd83d60ca041a181bd4bc1abc54333d18716f60befb7e10955af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67e69a0f626fd83d60ca041a181bd4bc1abc54333d18716f60befb7e10955af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67e69a0f626fd83d60ca041a181bd4bc1abc54333d18716f60befb7e10955af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:15:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67e69a0f626fd83d60ca041a181bd4bc1abc54333d18716f60befb7e10955af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:15:24 compute-0 podman[98819]: 2026-01-29 09:15:24.857149571 +0000 UTC m=+0.148871220 container init 551022f74e6c05be216f820cd2f8d7f4d1a9ec01d68aa762106a032c15031b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_keller, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 09:15:24 compute-0 podman[98819]: 2026-01-29 09:15:24.865624985 +0000 UTC m=+0.157346614 container start 551022f74e6c05be216f820cd2f8d7f4d1a9ec01d68aa762106a032c15031b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_keller, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:15:24 compute-0 podman[98819]: 2026-01-29 09:15:24.870255543 +0000 UTC m=+0.161977202 container attach 551022f74e6c05be216f820cd2f8d7f4d1a9ec01d68aa762106a032c15031b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 09:15:24 compute-0 ceph-mon[75183]: 6.6 scrub starts
Jan 29 09:15:24 compute-0 ceph-mon[75183]: 6.6 scrub ok
Jan 29 09:15:24 compute-0 ceph-mon[75183]: pgmap v141: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]: {
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:     "0": [
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:         {
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "devices": [
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "/dev/loop3"
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             ],
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_name": "ceph_lv0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_size": "21470642176",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "name": "ceph_lv0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "tags": {
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.cluster_name": "ceph",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.crush_device_class": "",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.encrypted": "0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.objectstore": "bluestore",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.osd_id": "0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.type": "block",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.vdo": "0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.with_tpm": "0"
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             },
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "type": "block",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "vg_name": "ceph_vg0"
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:         }
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:     ],
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:     "1": [
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:         {
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "devices": [
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "/dev/loop4"
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             ],
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_name": "ceph_lv1",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_size": "21470642176",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "name": "ceph_lv1",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "tags": {
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.cluster_name": "ceph",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.crush_device_class": "",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.encrypted": "0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.objectstore": "bluestore",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.osd_id": "1",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.type": "block",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.vdo": "0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.with_tpm": "0"
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             },
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "type": "block",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "vg_name": "ceph_vg1"
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:         }
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:     ],
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:     "2": [
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:         {
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "devices": [
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "/dev/loop5"
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             ],
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_name": "ceph_lv2",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_size": "21470642176",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "name": "ceph_lv2",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "tags": {
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.cluster_name": "ceph",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.crush_device_class": "",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.encrypted": "0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.objectstore": "bluestore",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.osd_id": "2",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.type": "block",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.vdo": "0",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:                 "ceph.with_tpm": "0"
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             },
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "type": "block",
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:             "vg_name": "ceph_vg2"
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:         }
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]:     ]
Jan 29 09:15:25 compute-0 xenodochial_keller[98835]: }
Jan 29 09:15:25 compute-0 systemd[1]: libpod-551022f74e6c05be216f820cd2f8d7f4d1a9ec01d68aa762106a032c15031b20.scope: Deactivated successfully.
Jan 29 09:15:25 compute-0 podman[98819]: 2026-01-29 09:15:25.175805325 +0000 UTC m=+0.467526954 container died 551022f74e6c05be216f820cd2f8d7f4d1a9ec01d68aa762106a032c15031b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 09:15:25 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 29 09:15:25 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 29 09:15:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b67e69a0f626fd83d60ca041a181bd4bc1abc54333d18716f60befb7e10955af-merged.mount: Deactivated successfully.
Jan 29 09:15:25 compute-0 podman[98819]: 2026-01-29 09:15:25.532704455 +0000 UTC m=+0.824426084 container remove 551022f74e6c05be216f820cd2f8d7f4d1a9ec01d68aa762106a032c15031b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_keller, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:15:25 compute-0 systemd[1]: libpod-conmon-551022f74e6c05be216f820cd2f8d7f4d1a9ec01d68aa762106a032c15031b20.scope: Deactivated successfully.
Jan 29 09:15:25 compute-0 sudo[98741]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:25 compute-0 sudo[98858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:15:25 compute-0 sudo[98858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:15:25 compute-0 sudo[98858]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:25 compute-0 sudo[98883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:15:25 compute-0 sudo[98883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:15:25 compute-0 ceph-mon[75183]: 4.4 scrub starts
Jan 29 09:15:25 compute-0 ceph-mon[75183]: 4.4 scrub ok
Jan 29 09:15:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v142: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:26 compute-0 podman[98920]: 2026-01-29 09:15:26.069070788 +0000 UTC m=+0.091968489 container create 4eced95dfc11b2feb14bc177f1ea7fea94b91d232c3163e9f60254e1d3cedb19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 09:15:26 compute-0 podman[98920]: 2026-01-29 09:15:26.002987505 +0000 UTC m=+0.025885246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:15:26 compute-0 systemd[1]: Started libpod-conmon-4eced95dfc11b2feb14bc177f1ea7fea94b91d232c3163e9f60254e1d3cedb19.scope.
Jan 29 09:15:26 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:15:26 compute-0 podman[98920]: 2026-01-29 09:15:26.168752688 +0000 UTC m=+0.191650429 container init 4eced95dfc11b2feb14bc177f1ea7fea94b91d232c3163e9f60254e1d3cedb19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:15:26 compute-0 podman[98920]: 2026-01-29 09:15:26.174095596 +0000 UTC m=+0.196993307 container start 4eced95dfc11b2feb14bc177f1ea7fea94b91d232c3163e9f60254e1d3cedb19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_galois, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:15:26 compute-0 podman[98920]: 2026-01-29 09:15:26.178025494 +0000 UTC m=+0.200923205 container attach 4eced95dfc11b2feb14bc177f1ea7fea94b91d232c3163e9f60254e1d3cedb19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_galois, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 09:15:26 compute-0 affectionate_galois[98936]: 167 167
Jan 29 09:15:26 compute-0 systemd[1]: libpod-4eced95dfc11b2feb14bc177f1ea7fea94b91d232c3163e9f60254e1d3cedb19.scope: Deactivated successfully.
Jan 29 09:15:26 compute-0 podman[98920]: 2026-01-29 09:15:26.181178991 +0000 UTC m=+0.204076702 container died 4eced95dfc11b2feb14bc177f1ea7fea94b91d232c3163e9f60254e1d3cedb19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_galois, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 09:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b72aa0c2b1887b8edbabe2ed42a297916f119122efa09a9359152928c16365eb-merged.mount: Deactivated successfully.
Jan 29 09:15:26 compute-0 podman[98920]: 2026-01-29 09:15:26.306537421 +0000 UTC m=+0.329435132 container remove 4eced95dfc11b2feb14bc177f1ea7fea94b91d232c3163e9f60254e1d3cedb19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 09:15:26 compute-0 systemd[1]: libpod-conmon-4eced95dfc11b2feb14bc177f1ea7fea94b91d232c3163e9f60254e1d3cedb19.scope: Deactivated successfully.
Jan 29 09:15:26 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 29 09:15:26 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 29 09:15:26 compute-0 podman[98960]: 2026-01-29 09:15:26.475442993 +0000 UTC m=+0.069628053 container create a5aa32bdd9a58d51e4d742e8b154d22a3895ed74a81fbd76e20b1e90bca4c862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gould, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:15:26 compute-0 systemd[1]: Started libpod-conmon-a5aa32bdd9a58d51e4d742e8b154d22a3895ed74a81fbd76e20b1e90bca4c862.scope.
Jan 29 09:15:26 compute-0 podman[98960]: 2026-01-29 09:15:26.429390102 +0000 UTC m=+0.023575202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:15:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:15:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:15:26 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4d29f818ade3c06355306cbdd17be817ef1872b7854588cbaee31d2755aa227/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4d29f818ade3c06355306cbdd17be817ef1872b7854588cbaee31d2755aa227/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4d29f818ade3c06355306cbdd17be817ef1872b7854588cbaee31d2755aa227/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4d29f818ade3c06355306cbdd17be817ef1872b7854588cbaee31d2755aa227/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:15:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:15:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:15:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:15:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:15:26 compute-0 podman[98960]: 2026-01-29 09:15:26.564926982 +0000 UTC m=+0.159112072 container init a5aa32bdd9a58d51e4d742e8b154d22a3895ed74a81fbd76e20b1e90bca4c862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gould, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:15:26 compute-0 podman[98960]: 2026-01-29 09:15:26.573115538 +0000 UTC m=+0.167300598 container start a5aa32bdd9a58d51e4d742e8b154d22a3895ed74a81fbd76e20b1e90bca4c862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 09:15:26 compute-0 podman[98960]: 2026-01-29 09:15:26.59961855 +0000 UTC m=+0.193803640 container attach a5aa32bdd9a58d51e4d742e8b154d22a3895ed74a81fbd76e20b1e90bca4c862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gould, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:15:26 compute-0 ceph-mon[75183]: 6.4 scrub starts
Jan 29 09:15:26 compute-0 ceph-mon[75183]: 6.4 scrub ok
Jan 29 09:15:26 compute-0 ceph-mon[75183]: pgmap v142: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:27 compute-0 lvm[99055]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:15:27 compute-0 lvm[99054]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:15:27 compute-0 lvm[99055]: VG ceph_vg1 finished
Jan 29 09:15:27 compute-0 lvm[99054]: VG ceph_vg0 finished
Jan 29 09:15:27 compute-0 lvm[99057]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:15:27 compute-0 lvm[99057]: VG ceph_vg2 finished
Jan 29 09:15:27 compute-0 interesting_gould[98976]: {}
Jan 29 09:15:27 compute-0 systemd[1]: libpod-a5aa32bdd9a58d51e4d742e8b154d22a3895ed74a81fbd76e20b1e90bca4c862.scope: Deactivated successfully.
Jan 29 09:15:27 compute-0 systemd[1]: libpod-a5aa32bdd9a58d51e4d742e8b154d22a3895ed74a81fbd76e20b1e90bca4c862.scope: Consumed 1.203s CPU time.
Jan 29 09:15:27 compute-0 podman[98960]: 2026-01-29 09:15:27.379176684 +0000 UTC m=+0.973361764 container died a5aa32bdd9a58d51e4d742e8b154d22a3895ed74a81fbd76e20b1e90bca4c862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 09:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4d29f818ade3c06355306cbdd17be817ef1872b7854588cbaee31d2755aa227-merged.mount: Deactivated successfully.
Jan 29 09:15:27 compute-0 podman[98960]: 2026-01-29 09:15:27.436010283 +0000 UTC m=+1.030195343 container remove a5aa32bdd9a58d51e4d742e8b154d22a3895ed74a81fbd76e20b1e90bca4c862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 09:15:27 compute-0 systemd[1]: libpod-conmon-a5aa32bdd9a58d51e4d742e8b154d22a3895ed74a81fbd76e20b1e90bca4c862.scope: Deactivated successfully.
Jan 29 09:15:27 compute-0 sudo[98883]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:15:27 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:15:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:15:27 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:15:27 compute-0 sudo[99072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:15:27 compute-0 sudo[99072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:15:27 compute-0 sudo[99072]: pam_unix(sudo:session): session closed for user root
Jan 29 09:15:27 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 29 09:15:27 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 29 09:15:27 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 29 09:15:27 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 29 09:15:27 compute-0 ceph-mon[75183]: 6.1 scrub starts
Jan 29 09:15:27 compute-0 ceph-mon[75183]: 6.1 scrub ok
Jan 29 09:15:27 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:15:27 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:15:27 compute-0 ceph-mon[75183]: 6.8 scrub starts
Jan 29 09:15:27 compute-0 ceph-mon[75183]: 6.8 scrub ok
Jan 29 09:15:27 compute-0 ceph-mon[75183]: 3.12 scrub starts
Jan 29 09:15:27 compute-0 ceph-mon[75183]: 3.12 scrub ok
Jan 29 09:15:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v143: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:15:28 compute-0 ceph-mon[75183]: pgmap v143: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:29 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Jan 29 09:15:29 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Jan 29 09:15:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v144: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:31 compute-0 ceph-mon[75183]: 6.15 scrub starts
Jan 29 09:15:31 compute-0 ceph-mon[75183]: 6.15 scrub ok
Jan 29 09:15:31 compute-0 ceph-mon[75183]: pgmap v144: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:31 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Jan 29 09:15:31 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Jan 29 09:15:31 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 29 09:15:31 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 29 09:15:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v145: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:32 compute-0 ceph-mon[75183]: 6.14 scrub starts
Jan 29 09:15:32 compute-0 ceph-mon[75183]: 6.14 scrub ok
Jan 29 09:15:32 compute-0 ceph-mon[75183]: pgmap v145: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:32 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 29 09:15:32 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 29 09:15:32 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 29 09:15:32 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 29 09:15:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:15:33 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 29 09:15:33 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 29 09:15:33 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 29 09:15:33 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 29 09:15:33 compute-0 ceph-mon[75183]: 2.11 scrub starts
Jan 29 09:15:33 compute-0 ceph-mon[75183]: 2.11 scrub ok
Jan 29 09:15:33 compute-0 ceph-mon[75183]: 4.13 scrub starts
Jan 29 09:15:33 compute-0 ceph-mon[75183]: 4.13 scrub ok
Jan 29 09:15:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v146: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:34 compute-0 ceph-mon[75183]: 5.14 scrub starts
Jan 29 09:15:34 compute-0 ceph-mon[75183]: 5.14 scrub ok
Jan 29 09:15:34 compute-0 ceph-mon[75183]: 4.7 scrub starts
Jan 29 09:15:34 compute-0 ceph-mon[75183]: 4.7 scrub ok
Jan 29 09:15:34 compute-0 ceph-mon[75183]: 4.11 scrub starts
Jan 29 09:15:34 compute-0 ceph-mon[75183]: 4.11 scrub ok
Jan 29 09:15:34 compute-0 ceph-mon[75183]: pgmap v146: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:35 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 29 09:15:35 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 29 09:15:35 compute-0 ceph-mon[75183]: 4.1c scrub starts
Jan 29 09:15:35 compute-0 ceph-mon[75183]: 4.1c scrub ok
Jan 29 09:15:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v147: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:36 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 29 09:15:36 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 29 09:15:37 compute-0 ceph-mon[75183]: pgmap v147: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:37 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 29 09:15:37 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 29 09:15:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v148: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:38 compute-0 ceph-mon[75183]: 2.13 scrub starts
Jan 29 09:15:38 compute-0 ceph-mon[75183]: 2.13 scrub ok
Jan 29 09:15:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:15:39 compute-0 ceph-mon[75183]: 5.15 scrub starts
Jan 29 09:15:39 compute-0 ceph-mon[75183]: 5.15 scrub ok
Jan 29 09:15:39 compute-0 ceph-mon[75183]: pgmap v148: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:39 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 29 09:15:39 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 29 09:15:39 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 29 09:15:39 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 29 09:15:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v149: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:40 compute-0 ceph-mon[75183]: 4.5 scrub starts
Jan 29 09:15:40 compute-0 ceph-mon[75183]: 4.5 scrub ok
Jan 29 09:15:40 compute-0 ceph-mon[75183]: pgmap v149: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Jan 29 09:15:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Jan 29 09:15:41 compute-0 ceph-mon[75183]: 3.15 scrub starts
Jan 29 09:15:41 compute-0 ceph-mon[75183]: 3.15 scrub ok
Jan 29 09:15:41 compute-0 ceph-mon[75183]: 6.11 scrub starts
Jan 29 09:15:41 compute-0 ceph-mon[75183]: 6.11 scrub ok
Jan 29 09:15:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Jan 29 09:15:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Jan 29 09:15:41 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 29 09:15:41 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 29 09:15:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v150: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:42 compute-0 ceph-mon[75183]: 6.13 scrub starts
Jan 29 09:15:42 compute-0 ceph-mon[75183]: 6.13 scrub ok
Jan 29 09:15:42 compute-0 ceph-mon[75183]: 7.1b scrub starts
Jan 29 09:15:42 compute-0 ceph-mon[75183]: pgmap v150: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:42 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Jan 29 09:15:42 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Jan 29 09:15:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:15:43 compute-0 ceph-mon[75183]: 7.1b scrub ok
Jan 29 09:15:43 compute-0 ceph-mon[75183]: 6.1f scrub starts
Jan 29 09:15:43 compute-0 ceph-mon[75183]: 6.1f scrub ok
Jan 29 09:15:43 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 29 09:15:43 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 29 09:15:43 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 29 09:15:43 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 29 09:15:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v151: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:44 compute-0 ceph-mon[75183]: 6.e scrub starts
Jan 29 09:15:44 compute-0 ceph-mon[75183]: 6.e scrub ok
Jan 29 09:15:44 compute-0 ceph-mon[75183]: 7.13 scrub starts
Jan 29 09:15:44 compute-0 ceph-mon[75183]: pgmap v151: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:45 compute-0 ceph-mon[75183]: 7.13 scrub ok
Jan 29 09:15:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v152: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:46 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 29 09:15:46 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 29 09:15:46 compute-0 ceph-mon[75183]: pgmap v152: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:47 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 29 09:15:47 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 29 09:15:47 compute-0 ceph-mon[75183]: 4.f scrub starts
Jan 29 09:15:47 compute-0 ceph-mon[75183]: 4.f scrub ok
Jan 29 09:15:47 compute-0 ceph-mon[75183]: 3.18 scrub starts
Jan 29 09:15:47 compute-0 ceph-mon[75183]: 3.18 scrub ok
Jan 29 09:15:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v153: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:15:48 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 29 09:15:48 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 29 09:15:48 compute-0 ceph-mon[75183]: pgmap v153: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:49 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 29 09:15:49 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 29 09:15:49 compute-0 ceph-mon[75183]: 6.b scrub starts
Jan 29 09:15:49 compute-0 ceph-mon[75183]: 6.b scrub ok
Jan 29 09:15:49 compute-0 ceph-mon[75183]: 3.16 scrub starts
Jan 29 09:15:49 compute-0 ceph-mon[75183]: 3.16 scrub ok
Jan 29 09:15:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v154: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:50 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 29 09:15:50 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 29 09:15:50 compute-0 ceph-mon[75183]: pgmap v154: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:50 compute-0 ceph-mon[75183]: 7.11 scrub starts
Jan 29 09:15:50 compute-0 ceph-mon[75183]: 7.11 scrub ok
Jan 29 09:15:51 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 29 09:15:51 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 29 09:15:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v155: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:53 compute-0 ceph-mon[75183]: 4.2 scrub starts
Jan 29 09:15:53 compute-0 ceph-mon[75183]: 4.2 scrub ok
Jan 29 09:15:53 compute-0 ceph-mon[75183]: pgmap v155: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:15:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v156: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:54 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 29 09:15:54 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 29 09:15:54 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 29 09:15:54 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 29 09:15:55 compute-0 ceph-mon[75183]: pgmap v156: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:55 compute-0 ceph-mon[75183]: 4.1b scrub starts
Jan 29 09:15:55 compute-0 ceph-mon[75183]: 4.1b scrub ok
Jan 29 09:15:55 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 29 09:15:55 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 29 09:15:55 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 29 09:15:55 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 29 09:15:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:15:55
Jan 29 09:15:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:15:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:15:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'images', 'backups']
Jan 29 09:15:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v157: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:56 compute-0 ceph-mon[75183]: 4.9 scrub starts
Jan 29 09:15:56 compute-0 ceph-mon[75183]: 4.9 scrub ok
Jan 29 09:15:56 compute-0 ceph-mon[75183]: 7.15 scrub starts
Jan 29 09:15:56 compute-0 ceph-mon[75183]: 7.15 scrub ok
Jan 29 09:15:56 compute-0 ceph-mon[75183]: 2.16 scrub starts
Jan 29 09:15:56 compute-0 ceph-mon[75183]: 2.16 scrub ok
Jan 29 09:15:56 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 29 09:15:56 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:15:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:15:56 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 29 09:15:56 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 29 09:15:57 compute-0 ceph-mon[75183]: pgmap v157: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:57 compute-0 ceph-mon[75183]: 3.11 scrub starts
Jan 29 09:15:57 compute-0 ceph-mon[75183]: 3.11 scrub ok
Jan 29 09:15:57 compute-0 ceph-mon[75183]: 3.9 scrub starts
Jan 29 09:15:57 compute-0 ceph-mon[75183]: 3.9 scrub ok
Jan 29 09:15:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v158: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:15:58 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 29 09:15:58 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 29 09:15:59 compute-0 ceph-mon[75183]: pgmap v158: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:15:59 compute-0 ceph-mon[75183]: 2.8 scrub starts
Jan 29 09:15:59 compute-0 ceph-mon[75183]: 2.8 scrub ok
Jan 29 09:16:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v159: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:00 compute-0 ceph-mon[75183]: pgmap v159: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:00 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 29 09:16:00 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 29 09:16:01 compute-0 ceph-mon[75183]: 3.a scrub starts
Jan 29 09:16:01 compute-0 ceph-mon[75183]: 3.a scrub ok
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:16:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:16:01 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 29 09:16:01 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 29 09:16:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v160: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:02 compute-0 ceph-mon[75183]: 2.b scrub starts
Jan 29 09:16:02 compute-0 ceph-mon[75183]: 2.b scrub ok
Jan 29 09:16:02 compute-0 ceph-mon[75183]: pgmap v160: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:02 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 29 09:16:02 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 29 09:16:02 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 29 09:16:02 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 29 09:16:03 compute-0 ceph-mon[75183]: 7.a scrub starts
Jan 29 09:16:03 compute-0 ceph-mon[75183]: 7.a scrub ok
Jan 29 09:16:03 compute-0 ceph-mon[75183]: 4.8 scrub starts
Jan 29 09:16:03 compute-0 ceph-mon[75183]: 4.8 scrub ok
Jan 29 09:16:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:16:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v161: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:04 compute-0 ceph-mon[75183]: pgmap v161: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:04 compute-0 sudo[98409]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:05 compute-0 sudo[99322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozdqeimuyuwslzqrpgxykyalvnodxqqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678164.9668086-132-253764381040312/AnsiballZ_command.py'
Jan 29 09:16:05 compute-0 sudo[99322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:05 compute-0 python3.9[99324]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:16:05 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 29 09:16:05 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 29 09:16:05 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Jan 29 09:16:05 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Jan 29 09:16:05 compute-0 ceph-mon[75183]: 7.f scrub starts
Jan 29 09:16:05 compute-0 ceph-mon[75183]: 7.f scrub ok
Jan 29 09:16:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v162: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:06 compute-0 sudo[99322]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:06 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 29 09:16:06 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 29 09:16:06 compute-0 ceph-mon[75183]: 6.17 scrub starts
Jan 29 09:16:06 compute-0 ceph-mon[75183]: 6.17 scrub ok
Jan 29 09:16:06 compute-0 ceph-mon[75183]: pgmap v162: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:06 compute-0 ceph-mon[75183]: 5.3 scrub starts
Jan 29 09:16:06 compute-0 ceph-mon[75183]: 5.3 scrub ok
Jan 29 09:16:06 compute-0 sudo[99609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfliulhagcgubxqknpbmroqfqntyrhhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678166.2149851-140-208811206798395/AnsiballZ_selinux.py'
Jan 29 09:16:06 compute-0 sudo[99609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:07 compute-0 python3.9[99611]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 29 09:16:07 compute-0 sudo[99609]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:07 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 29 09:16:07 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 29 09:16:07 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 29 09:16:07 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 29 09:16:07 compute-0 sudo[99761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmlugaguxirkhslqijcafqamcmwgbiyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678167.4644692-151-176670959511688/AnsiballZ_command.py'
Jan 29 09:16:07 compute-0 sudo[99761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:07 compute-0 ceph-mon[75183]: 7.1c scrub starts
Jan 29 09:16:07 compute-0 ceph-mon[75183]: 7.1c scrub ok
Jan 29 09:16:07 compute-0 ceph-mon[75183]: 3.6 scrub starts
Jan 29 09:16:07 compute-0 ceph-mon[75183]: 3.6 scrub ok
Jan 29 09:16:07 compute-0 python3.9[99763]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 29 09:16:07 compute-0 sudo[99761]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v163: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:16:08 compute-0 sudo[99913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slfwwjmrcepocrxsccidknnbocvgyrqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678168.1003177-159-62147374688796/AnsiballZ_file.py'
Jan 29 09:16:08 compute-0 sudo[99913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:08 compute-0 python3.9[99915]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:16:08 compute-0 sudo[99913]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:08 compute-0 ceph-mon[75183]: pgmap v163: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:09 compute-0 sudo[100065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvqyfopjunxfvbkxxdgdwkcouqvearqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678168.716664-167-114293252148960/AnsiballZ_mount.py'
Jan 29 09:16:09 compute-0 sudo[100065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:09 compute-0 python3.9[100067]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 29 09:16:09 compute-0 sudo[100065]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:09 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 29 09:16:09 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 29 09:16:09 compute-0 ceph-mon[75183]: 3.e scrub starts
Jan 29 09:16:09 compute-0 ceph-mon[75183]: 3.e scrub ok
Jan 29 09:16:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v164: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:10 compute-0 sudo[100217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtfrrxyumkemblgopvhucmmqcuwtmjld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678170.1362877-195-62928237716021/AnsiballZ_file.py'
Jan 29 09:16:10 compute-0 sudo[100217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:10 compute-0 python3.9[100219]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:16:10 compute-0 sudo[100217]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:10 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 29 09:16:10 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 29 09:16:10 compute-0 ceph-mon[75183]: pgmap v164: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:10 compute-0 sudo[100369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynddrbotpdgqfijrcrnbrgfokqxdilgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678170.7289426-203-98272030993211/AnsiballZ_stat.py'
Jan 29 09:16:10 compute-0 sudo[100369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:11 compute-0 python3.9[100371]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:16:11 compute-0 sudo[100369]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:11 compute-0 sudo[100447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxdslpdxidomqlyadpyadtbsapnmxymk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678170.7289426-203-98272030993211/AnsiballZ_file.py'
Jan 29 09:16:11 compute-0 sudo[100447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:11 compute-0 python3.9[100449]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:16:11 compute-0 sudo[100447]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:11 compute-0 ceph-mon[75183]: 4.12 scrub starts
Jan 29 09:16:11 compute-0 ceph-mon[75183]: 4.12 scrub ok
Jan 29 09:16:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v165: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:12 compute-0 sudo[100599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azrqksmivdtxgymprguvofytneovaatm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678172.079346-224-148284451766719/AnsiballZ_stat.py'
Jan 29 09:16:12 compute-0 sudo[100599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:12 compute-0 python3.9[100601]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:16:12 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 29 09:16:12 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 29 09:16:12 compute-0 sudo[100599]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:12 compute-0 ceph-mon[75183]: pgmap v165: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:12 compute-0 ceph-mon[75183]: 7.8 scrub starts
Jan 29 09:16:12 compute-0 ceph-mon[75183]: 7.8 scrub ok
Jan 29 09:16:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:16:13 compute-0 sudo[100753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elamrkjqcyghazguqzrlnkhctmangsoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678172.9809802-237-228775526957196/AnsiballZ_getent.py'
Jan 29 09:16:13 compute-0 sudo[100753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:13 compute-0 python3.9[100755]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 29 09:16:13 compute-0 sudo[100753]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:14 compute-0 sudo[100906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxofiuzzphzzgljgkdzgnoyeqklrrcli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678173.798967-247-123454090029437/AnsiballZ_getent.py'
Jan 29 09:16:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v166: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:14 compute-0 sudo[100906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:14 compute-0 python3.9[100908]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 29 09:16:14 compute-0 sudo[100906]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:14 compute-0 sudo[101059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beuklrbxrgkltasqklxgddxmelaxdzdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678174.3906243-255-110260744177604/AnsiballZ_group.py'
Jan 29 09:16:14 compute-0 sudo[101059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:15 compute-0 python3.9[101061]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 29 09:16:15 compute-0 sudo[101059]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:15 compute-0 ceph-mon[75183]: pgmap v166: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:15 compute-0 sudo[101211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unzedqnpssquicotymouscgezglqsvrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678175.254928-264-117115345529603/AnsiballZ_file.py'
Jan 29 09:16:15 compute-0 sudo[101211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:15 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 29 09:16:15 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 29 09:16:15 compute-0 python3.9[101213]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 29 09:16:15 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 29 09:16:15 compute-0 sudo[101211]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:15 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 29 09:16:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v167: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:16 compute-0 ceph-mon[75183]: 5.5 scrub starts
Jan 29 09:16:16 compute-0 ceph-mon[75183]: 5.5 scrub ok
Jan 29 09:16:16 compute-0 sudo[101363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjykvfwxsxgqkidfrcinatllpwvfrfks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678175.98725-275-45044214465632/AnsiballZ_dnf.py'
Jan 29 09:16:16 compute-0 sudo[101363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:16 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 29 09:16:16 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 29 09:16:16 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 29 09:16:16 compute-0 python3.9[101365]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:16:16 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 29 09:16:17 compute-0 ceph-mon[75183]: 6.2 scrub starts
Jan 29 09:16:17 compute-0 ceph-mon[75183]: 6.2 scrub ok
Jan 29 09:16:17 compute-0 ceph-mon[75183]: pgmap v167: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:17 compute-0 ceph-mon[75183]: 7.2 scrub starts
Jan 29 09:16:17 compute-0 ceph-mon[75183]: 7.2 scrub ok
Jan 29 09:16:17 compute-0 ceph-mon[75183]: 5.2 scrub starts
Jan 29 09:16:17 compute-0 ceph-mon[75183]: 5.2 scrub ok
Jan 29 09:16:17 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 29 09:16:17 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 29 09:16:17 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 29 09:16:17 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 29 09:16:17 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 29 09:16:17 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 29 09:16:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v168: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:18 compute-0 sudo[101363]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:18 compute-0 ceph-mon[75183]: 7.1 scrub starts
Jan 29 09:16:18 compute-0 ceph-mon[75183]: 7.1 scrub ok
Jan 29 09:16:18 compute-0 ceph-mon[75183]: 2.1f scrub starts
Jan 29 09:16:18 compute-0 ceph-mon[75183]: 2.1f scrub ok
Jan 29 09:16:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:16:18 compute-0 sudo[101516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtegprpsurbwalrhnfjgeecgithcbkbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678178.1636782-283-63718308575319/AnsiballZ_file.py'
Jan 29 09:16:18 compute-0 sudo[101516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:18 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Jan 29 09:16:18 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Jan 29 09:16:18 compute-0 python3.9[101518]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:16:18 compute-0 sudo[101516]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:19 compute-0 ceph-mon[75183]: 4.14 scrub starts
Jan 29 09:16:19 compute-0 ceph-mon[75183]: 4.14 scrub ok
Jan 29 09:16:19 compute-0 ceph-mon[75183]: pgmap v168: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:19 compute-0 sudo[101668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxcihskzazyohpmudnxooalfwqhbbvmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678179.017666-291-92636162661872/AnsiballZ_stat.py'
Jan 29 09:16:19 compute-0 sudo[101668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:19 compute-0 python3.9[101670]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:16:19 compute-0 sudo[101668]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:19 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 29 09:16:19 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 29 09:16:19 compute-0 sudo[101746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkmvhusgbqqivdtykfascpvcfmwnmccx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678179.017666-291-92636162661872/AnsiballZ_file.py'
Jan 29 09:16:19 compute-0 sudo[101746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:19 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 29 09:16:19 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 29 09:16:19 compute-0 python3.9[101748]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:16:19 compute-0 sudo[101746]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v169: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:20 compute-0 ceph-mon[75183]: 6.1c scrub starts
Jan 29 09:16:20 compute-0 ceph-mon[75183]: 6.1c scrub ok
Jan 29 09:16:20 compute-0 ceph-mon[75183]: 3.3 scrub starts
Jan 29 09:16:20 compute-0 ceph-mon[75183]: 3.3 scrub ok
Jan 29 09:16:20 compute-0 sudo[101898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxylsyxemlnksiiohhahjolmwrhsfcvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678180.0657473-304-10609006105694/AnsiballZ_stat.py'
Jan 29 09:16:20 compute-0 sudo[101898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:20 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 29 09:16:20 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 29 09:16:20 compute-0 python3.9[101900]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:16:20 compute-0 sudo[101898]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:20 compute-0 sudo[101976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xamzfkwxfhmipnaecjdtbpmfddcyjsbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678180.0657473-304-10609006105694/AnsiballZ_file.py'
Jan 29 09:16:20 compute-0 sudo[101976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:20 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 29 09:16:20 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 29 09:16:20 compute-0 python3.9[101978]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:16:20 compute-0 sudo[101976]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:21 compute-0 ceph-mon[75183]: 2.1b scrub starts
Jan 29 09:16:21 compute-0 ceph-mon[75183]: 2.1b scrub ok
Jan 29 09:16:21 compute-0 ceph-mon[75183]: pgmap v169: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:21 compute-0 ceph-mon[75183]: 2.2 scrub starts
Jan 29 09:16:21 compute-0 ceph-mon[75183]: 2.2 scrub ok
Jan 29 09:16:21 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 29 09:16:21 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 29 09:16:21 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 29 09:16:21 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 29 09:16:21 compute-0 sudo[102128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecpsztlgruszoibtvncyennafwjmswfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678181.32377-319-61969068833341/AnsiballZ_dnf.py'
Jan 29 09:16:21 compute-0 sudo[102128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:21 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 29 09:16:21 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 29 09:16:21 compute-0 python3.9[102130]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:16:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v170: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:22 compute-0 ceph-mon[75183]: 5.13 scrub starts
Jan 29 09:16:22 compute-0 ceph-mon[75183]: 5.13 scrub ok
Jan 29 09:16:22 compute-0 ceph-mon[75183]: 3.7 scrub starts
Jan 29 09:16:22 compute-0 ceph-mon[75183]: 3.7 scrub ok
Jan 29 09:16:22 compute-0 ceph-mon[75183]: 2.f scrub starts
Jan 29 09:16:22 compute-0 ceph-mon[75183]: 2.f scrub ok
Jan 29 09:16:22 compute-0 ceph-mon[75183]: pgmap v170: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:22 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 29 09:16:22 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 29 09:16:22 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 29 09:16:22 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 29 09:16:23 compute-0 ceph-mon[75183]: 5.12 scrub starts
Jan 29 09:16:23 compute-0 ceph-mon[75183]: 5.12 scrub ok
Jan 29 09:16:23 compute-0 ceph-mon[75183]: 7.6 scrub starts
Jan 29 09:16:23 compute-0 ceph-mon[75183]: 7.6 scrub ok
Jan 29 09:16:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:16:23 compute-0 sudo[102128]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:23 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 29 09:16:23 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 29 09:16:23 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 29 09:16:23 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 29 09:16:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v171: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:24 compute-0 python3.9[102281]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:16:24 compute-0 ceph-mon[75183]: 2.15 scrub starts
Jan 29 09:16:24 compute-0 ceph-mon[75183]: 2.15 scrub ok
Jan 29 09:16:24 compute-0 ceph-mon[75183]: 7.5 scrub starts
Jan 29 09:16:24 compute-0 ceph-mon[75183]: 7.5 scrub ok
Jan 29 09:16:24 compute-0 ceph-mon[75183]: pgmap v171: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:24 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 29 09:16:24 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 29 09:16:24 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 29 09:16:24 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 29 09:16:24 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 29 09:16:24 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 29 09:16:24 compute-0 python3.9[102433]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 29 09:16:25 compute-0 ceph-mon[75183]: 5.9 scrub starts
Jan 29 09:16:25 compute-0 ceph-mon[75183]: 5.9 scrub ok
Jan 29 09:16:25 compute-0 ceph-mon[75183]: 7.c scrub starts
Jan 29 09:16:25 compute-0 ceph-mon[75183]: 7.c scrub ok
Jan 29 09:16:25 compute-0 ceph-mon[75183]: 2.1c scrub starts
Jan 29 09:16:25 compute-0 ceph-mon[75183]: 2.1c scrub ok
Jan 29 09:16:25 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 29 09:16:25 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 29 09:16:25 compute-0 python3.9[102584]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:16:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v172: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:26 compute-0 ceph-mon[75183]: 5.11 scrub starts
Jan 29 09:16:26 compute-0 ceph-mon[75183]: 5.11 scrub ok
Jan 29 09:16:26 compute-0 ceph-mon[75183]: 7.1a scrub starts
Jan 29 09:16:26 compute-0 ceph-mon[75183]: 7.1a scrub ok
Jan 29 09:16:26 compute-0 ceph-mon[75183]: pgmap v172: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:26 compute-0 sudo[102734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmpcspunahxcicoeapeyvxifucaibbuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678185.9758976-360-121175261993738/AnsiballZ_systemd.py'
Jan 29 09:16:26 compute-0 sudo[102734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:16:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:16:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:16:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:16:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:16:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:16:26 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 29 09:16:26 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 29 09:16:26 compute-0 python3.9[102736]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:16:26 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 29 09:16:26 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 29 09:16:26 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 29 09:16:27 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 29 09:16:27 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 29 09:16:27 compute-0 sudo[102734]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:27 compute-0 ceph-mon[75183]: 2.a scrub starts
Jan 29 09:16:27 compute-0 ceph-mon[75183]: 2.a scrub ok
Jan 29 09:16:27 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 29 09:16:27 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 29 09:16:27 compute-0 sudo[102874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:16:27 compute-0 sudo[102874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:16:27 compute-0 sudo[102874]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:27 compute-0 sudo[102923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:16:27 compute-0 sudo[102923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:16:27 compute-0 python3.9[102920]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 29 09:16:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v173: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:28 compute-0 sudo[102923]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:16:28 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:16:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:16:28 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:16:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:16:28 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:16:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:16:28 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:16:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:16:28 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:16:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:16:28 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:16:28 compute-0 sudo[103002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:16:28 compute-0 sudo[103002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:16:28 compute-0 sudo[103002]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:28 compute-0 sudo[103027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:16:28 compute-0 sudo[103027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:16:28 compute-0 ceph-mon[75183]: 7.18 scrub starts
Jan 29 09:16:28 compute-0 ceph-mon[75183]: 7.18 scrub ok
Jan 29 09:16:28 compute-0 ceph-mon[75183]: pgmap v173: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:28 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:16:28 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:16:28 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:16:28 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:16:28 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:16:28 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:16:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:16:28 compute-0 podman[103065]: 2026-01-29 09:16:28.556547987 +0000 UTC m=+0.040176719 container create e3c24c2a2df385ac1c83ebfa6a0e18456363ee87aff11d62545c60053dbf06ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:16:28 compute-0 systemd[1]: Started libpod-conmon-e3c24c2a2df385ac1c83ebfa6a0e18456363ee87aff11d62545c60053dbf06ee.scope.
Jan 29 09:16:28 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:16:28 compute-0 podman[103065]: 2026-01-29 09:16:28.540636824 +0000 UTC m=+0.024265576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:16:28 compute-0 podman[103065]: 2026-01-29 09:16:28.64730791 +0000 UTC m=+0.130936672 container init e3c24c2a2df385ac1c83ebfa6a0e18456363ee87aff11d62545c60053dbf06ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:16:28 compute-0 podman[103065]: 2026-01-29 09:16:28.653649209 +0000 UTC m=+0.137277941 container start e3c24c2a2df385ac1c83ebfa6a0e18456363ee87aff11d62545c60053dbf06ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_driscoll, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:16:28 compute-0 podman[103065]: 2026-01-29 09:16:28.657376308 +0000 UTC m=+0.141005070 container attach e3c24c2a2df385ac1c83ebfa6a0e18456363ee87aff11d62545c60053dbf06ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_driscoll, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 09:16:28 compute-0 suspicious_driscoll[103081]: 167 167
Jan 29 09:16:28 compute-0 systemd[1]: libpod-e3c24c2a2df385ac1c83ebfa6a0e18456363ee87aff11d62545c60053dbf06ee.scope: Deactivated successfully.
Jan 29 09:16:28 compute-0 podman[103065]: 2026-01-29 09:16:28.661673722 +0000 UTC m=+0.145302454 container died e3c24c2a2df385ac1c83ebfa6a0e18456363ee87aff11d62545c60053dbf06ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 29 09:16:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0598857b0bbc379f3ae46e374c52ec259bad88dfa27468095103ccfb52fda72a-merged.mount: Deactivated successfully.
Jan 29 09:16:28 compute-0 podman[103065]: 2026-01-29 09:16:28.695385738 +0000 UTC m=+0.179014470 container remove e3c24c2a2df385ac1c83ebfa6a0e18456363ee87aff11d62545c60053dbf06ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_driscoll, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 29 09:16:28 compute-0 systemd[1]: libpod-conmon-e3c24c2a2df385ac1c83ebfa6a0e18456363ee87aff11d62545c60053dbf06ee.scope: Deactivated successfully.
Jan 29 09:16:28 compute-0 podman[103106]: 2026-01-29 09:16:28.814836513 +0000 UTC m=+0.040830776 container create 8a8a6f31e63dafc01ec7b69cf7e3ad1af280725a84a4de57119a68160ea275e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wiles, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:16:28 compute-0 systemd[1]: Started libpod-conmon-8a8a6f31e63dafc01ec7b69cf7e3ad1af280725a84a4de57119a68160ea275e9.scope.
Jan 29 09:16:28 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9698cfd01cb359fc63fb70983d344243069b6664bdee084c077df3aeb69ed40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9698cfd01cb359fc63fb70983d344243069b6664bdee084c077df3aeb69ed40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9698cfd01cb359fc63fb70983d344243069b6664bdee084c077df3aeb69ed40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9698cfd01cb359fc63fb70983d344243069b6664bdee084c077df3aeb69ed40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9698cfd01cb359fc63fb70983d344243069b6664bdee084c077df3aeb69ed40/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:16:28 compute-0 podman[103106]: 2026-01-29 09:16:28.797178514 +0000 UTC m=+0.023172807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:16:28 compute-0 podman[103106]: 2026-01-29 09:16:28.895638421 +0000 UTC m=+0.121632704 container init 8a8a6f31e63dafc01ec7b69cf7e3ad1af280725a84a4de57119a68160ea275e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wiles, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:16:28 compute-0 podman[103106]: 2026-01-29 09:16:28.902078483 +0000 UTC m=+0.128072746 container start 8a8a6f31e63dafc01ec7b69cf7e3ad1af280725a84a4de57119a68160ea275e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wiles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:16:28 compute-0 podman[103106]: 2026-01-29 09:16:28.905510684 +0000 UTC m=+0.131504937 container attach 8a8a6f31e63dafc01ec7b69cf7e3ad1af280725a84a4de57119a68160ea275e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:16:29 compute-0 fervent_wiles[103122]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:16:29 compute-0 fervent_wiles[103122]: --> All data devices are unavailable
Jan 29 09:16:29 compute-0 systemd[1]: libpod-8a8a6f31e63dafc01ec7b69cf7e3ad1af280725a84a4de57119a68160ea275e9.scope: Deactivated successfully.
Jan 29 09:16:29 compute-0 podman[103106]: 2026-01-29 09:16:29.33799347 +0000 UTC m=+0.563987733 container died 8a8a6f31e63dafc01ec7b69cf7e3ad1af280725a84a4de57119a68160ea275e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wiles, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 09:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9698cfd01cb359fc63fb70983d344243069b6664bdee084c077df3aeb69ed40-merged.mount: Deactivated successfully.
Jan 29 09:16:29 compute-0 podman[103106]: 2026-01-29 09:16:29.38463302 +0000 UTC m=+0.610627293 container remove 8a8a6f31e63dafc01ec7b69cf7e3ad1af280725a84a4de57119a68160ea275e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 09:16:29 compute-0 systemd[1]: libpod-conmon-8a8a6f31e63dafc01ec7b69cf7e3ad1af280725a84a4de57119a68160ea275e9.scope: Deactivated successfully.
Jan 29 09:16:29 compute-0 sudo[103027]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:29 compute-0 sudo[103222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:16:29 compute-0 sudo[103222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:16:29 compute-0 sudo[103222]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:29 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 29 09:16:29 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 29 09:16:29 compute-0 sudo[103254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:16:29 compute-0 sudo[103254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:16:29 compute-0 sudo[103328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nppslisceddknebvjteeexmksmdkfvsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678189.3255653-417-106618173229232/AnsiballZ_systemd.py'
Jan 29 09:16:29 compute-0 sudo[103328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:29 compute-0 ceph-mon[75183]: 7.e scrub starts
Jan 29 09:16:29 compute-0 ceph-mon[75183]: 7.e scrub ok
Jan 29 09:16:29 compute-0 podman[103342]: 2026-01-29 09:16:29.794154177 +0000 UTC m=+0.040219051 container create 877acffefd34de57c6d41a69e13ba730eb96f6e6d0a5eecb71596f6f743e3954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 09:16:29 compute-0 systemd[1]: Started libpod-conmon-877acffefd34de57c6d41a69e13ba730eb96f6e6d0a5eecb71596f6f743e3954.scope.
Jan 29 09:16:29 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:16:29 compute-0 podman[103342]: 2026-01-29 09:16:29.864567548 +0000 UTC m=+0.110632452 container init 877acffefd34de57c6d41a69e13ba730eb96f6e6d0a5eecb71596f6f743e3954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 29 09:16:29 compute-0 podman[103342]: 2026-01-29 09:16:29.870055964 +0000 UTC m=+0.116120828 container start 877acffefd34de57c6d41a69e13ba730eb96f6e6d0a5eecb71596f6f743e3954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:16:29 compute-0 podman[103342]: 2026-01-29 09:16:29.775447849 +0000 UTC m=+0.021512743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:16:29 compute-0 podman[103342]: 2026-01-29 09:16:29.873096815 +0000 UTC m=+0.119161759 container attach 877acffefd34de57c6d41a69e13ba730eb96f6e6d0a5eecb71596f6f743e3954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:16:29 compute-0 systemd[1]: libpod-877acffefd34de57c6d41a69e13ba730eb96f6e6d0a5eecb71596f6f743e3954.scope: Deactivated successfully.
Jan 29 09:16:29 compute-0 pensive_neumann[103358]: 167 167
Jan 29 09:16:29 compute-0 conmon[103358]: conmon 877acffefd34de57c6d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-877acffefd34de57c6d41a69e13ba730eb96f6e6d0a5eecb71596f6f743e3954.scope/container/memory.events
Jan 29 09:16:29 compute-0 podman[103342]: 2026-01-29 09:16:29.878956581 +0000 UTC m=+0.125021455 container died 877acffefd34de57c6d41a69e13ba730eb96f6e6d0a5eecb71596f6f743e3954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_neumann, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 09:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f74acc103d47959c4b93e93fdfa8a16ec9e14a4b16beae7f7174818d46fd2e9d-merged.mount: Deactivated successfully.
Jan 29 09:16:29 compute-0 python3.9[103330]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:16:29 compute-0 podman[103342]: 2026-01-29 09:16:29.924614455 +0000 UTC m=+0.170679329 container remove 877acffefd34de57c6d41a69e13ba730eb96f6e6d0a5eecb71596f6f743e3954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_neumann, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 09:16:29 compute-0 systemd[1]: libpod-conmon-877acffefd34de57c6d41a69e13ba730eb96f6e6d0a5eecb71596f6f743e3954.scope: Deactivated successfully.
Jan 29 09:16:29 compute-0 sudo[103328]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v174: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:30 compute-0 podman[103385]: 2026-01-29 09:16:30.055404781 +0000 UTC m=+0.041713240 container create 9e67529d77fffa19c2e82333367149e8b0724143ece7bbfafdde15d2d94284e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_babbage, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 09:16:30 compute-0 systemd[1]: Started libpod-conmon-9e67529d77fffa19c2e82333367149e8b0724143ece7bbfafdde15d2d94284e5.scope.
Jan 29 09:16:30 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cbc3aa859206d1d9dade15c2a5cc204c99396dab943eafe43a06e2c9f7695a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cbc3aa859206d1d9dade15c2a5cc204c99396dab943eafe43a06e2c9f7695a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cbc3aa859206d1d9dade15c2a5cc204c99396dab943eafe43a06e2c9f7695a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cbc3aa859206d1d9dade15c2a5cc204c99396dab943eafe43a06e2c9f7695a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:16:30 compute-0 podman[103385]: 2026-01-29 09:16:30.037226138 +0000 UTC m=+0.023534627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:16:30 compute-0 podman[103385]: 2026-01-29 09:16:30.145022454 +0000 UTC m=+0.131330943 container init 9e67529d77fffa19c2e82333367149e8b0724143ece7bbfafdde15d2d94284e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_babbage, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 09:16:30 compute-0 podman[103385]: 2026-01-29 09:16:30.151212018 +0000 UTC m=+0.137520487 container start 9e67529d77fffa19c2e82333367149e8b0724143ece7bbfafdde15d2d94284e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 09:16:30 compute-0 podman[103385]: 2026-01-29 09:16:30.155935394 +0000 UTC m=+0.142243883 container attach 9e67529d77fffa19c2e82333367149e8b0724143ece7bbfafdde15d2d94284e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_babbage, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 09:16:30 compute-0 sudo[103559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqzvlwxgiccqycozudclzwccdbhgdira ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678190.0839672-417-43334416937849/AnsiballZ_systemd.py'
Jan 29 09:16:30 compute-0 sudo[103559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:30 compute-0 hungry_babbage[103450]: {
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:     "0": [
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:         {
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "devices": [
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "/dev/loop3"
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             ],
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_name": "ceph_lv0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_size": "21470642176",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "name": "ceph_lv0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "tags": {
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.cluster_name": "ceph",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.crush_device_class": "",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.encrypted": "0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.objectstore": "bluestore",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.osd_id": "0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.type": "block",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.vdo": "0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.with_tpm": "0"
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             },
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "type": "block",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "vg_name": "ceph_vg0"
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:         }
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:     ],
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:     "1": [
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:         {
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "devices": [
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "/dev/loop4"
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             ],
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_name": "ceph_lv1",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_size": "21470642176",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "name": "ceph_lv1",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "tags": {
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.cluster_name": "ceph",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.crush_device_class": "",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.encrypted": "0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.objectstore": "bluestore",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.osd_id": "1",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.type": "block",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.vdo": "0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.with_tpm": "0"
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             },
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "type": "block",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "vg_name": "ceph_vg1"
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:         }
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:     ],
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:     "2": [
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:         {
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "devices": [
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "/dev/loop5"
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             ],
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_name": "ceph_lv2",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_size": "21470642176",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "name": "ceph_lv2",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "tags": {
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.cluster_name": "ceph",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.crush_device_class": "",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.encrypted": "0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.objectstore": "bluestore",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.osd_id": "2",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.type": "block",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.vdo": "0",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:                 "ceph.with_tpm": "0"
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             },
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "type": "block",
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:             "vg_name": "ceph_vg2"
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:         }
Jan 29 09:16:30 compute-0 hungry_babbage[103450]:     ]
Jan 29 09:16:30 compute-0 hungry_babbage[103450]: }
Jan 29 09:16:30 compute-0 systemd[1]: libpod-9e67529d77fffa19c2e82333367149e8b0724143ece7bbfafdde15d2d94284e5.scope: Deactivated successfully.
Jan 29 09:16:30 compute-0 podman[103385]: 2026-01-29 09:16:30.446322713 +0000 UTC m=+0.432631212 container died 9e67529d77fffa19c2e82333367149e8b0724143ece7bbfafdde15d2d94284e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_babbage, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 09:16:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-00cbc3aa859206d1d9dade15c2a5cc204c99396dab943eafe43a06e2c9f7695a-merged.mount: Deactivated successfully.
Jan 29 09:16:30 compute-0 podman[103385]: 2026-01-29 09:16:30.48945836 +0000 UTC m=+0.475766829 container remove 9e67529d77fffa19c2e82333367149e8b0724143ece7bbfafdde15d2d94284e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_babbage, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 09:16:30 compute-0 systemd[1]: libpod-conmon-9e67529d77fffa19c2e82333367149e8b0724143ece7bbfafdde15d2d94284e5.scope: Deactivated successfully.
Jan 29 09:16:30 compute-0 sudo[103254]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:30 compute-0 sudo[103576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:16:30 compute-0 sudo[103576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:16:30 compute-0 sudo[103576]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:30 compute-0 sudo[103601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:16:30 compute-0 sudo[103601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:16:30 compute-0 python3.9[103561]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:16:30 compute-0 ceph-mon[75183]: pgmap v174: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:30 compute-0 sudo[103559]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:30 compute-0 podman[103666]: 2026-01-29 09:16:30.942503262 +0000 UTC m=+0.043363714 container create c5db9758597bc0c5f5a53b0fd9155b2d593466ea904b21f4568542f7d28f549b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 09:16:30 compute-0 systemd[1]: Started libpod-conmon-c5db9758597bc0c5f5a53b0fd9155b2d593466ea904b21f4568542f7d28f549b.scope.
Jan 29 09:16:31 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:16:31 compute-0 podman[103666]: 2026-01-29 09:16:31.0199256 +0000 UTC m=+0.120786082 container init c5db9758597bc0c5f5a53b0fd9155b2d593466ea904b21f4568542f7d28f549b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:16:31 compute-0 podman[103666]: 2026-01-29 09:16:30.925534681 +0000 UTC m=+0.026395163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:16:31 compute-0 podman[103666]: 2026-01-29 09:16:31.026990628 +0000 UTC m=+0.127851080 container start c5db9758597bc0c5f5a53b0fd9155b2d593466ea904b21f4568542f7d28f549b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jennings, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:16:31 compute-0 podman[103666]: 2026-01-29 09:16:31.03121621 +0000 UTC m=+0.132076652 container attach c5db9758597bc0c5f5a53b0fd9155b2d593466ea904b21f4568542f7d28f549b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:16:31 compute-0 amazing_jennings[103682]: 167 167
Jan 29 09:16:31 compute-0 systemd[1]: libpod-c5db9758597bc0c5f5a53b0fd9155b2d593466ea904b21f4568542f7d28f549b.scope: Deactivated successfully.
Jan 29 09:16:31 compute-0 podman[103666]: 2026-01-29 09:16:31.033594334 +0000 UTC m=+0.134454796 container died c5db9758597bc0c5f5a53b0fd9155b2d593466ea904b21f4568542f7d28f549b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jennings, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-48551d72f283a92a3210c84ed750932ffe767bd8f31c27de34f1e7ae2c3de29c-merged.mount: Deactivated successfully.
Jan 29 09:16:31 compute-0 podman[103666]: 2026-01-29 09:16:31.076448273 +0000 UTC m=+0.177308725 container remove c5db9758597bc0c5f5a53b0fd9155b2d593466ea904b21f4568542f7d28f549b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jennings, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:16:31 compute-0 systemd[1]: libpod-conmon-c5db9758597bc0c5f5a53b0fd9155b2d593466ea904b21f4568542f7d28f549b.scope: Deactivated successfully.
Jan 29 09:16:31 compute-0 podman[103707]: 2026-01-29 09:16:31.198638611 +0000 UTC m=+0.045004617 container create ccad6357bf39ad79e9976aa1f47f278f2a52795a0c7d939cf3813f3e25c29637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lumiere, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:16:31 compute-0 systemd[1]: Started libpod-conmon-ccad6357bf39ad79e9976aa1f47f278f2a52795a0c7d939cf3813f3e25c29637.scope.
Jan 29 09:16:31 compute-0 podman[103707]: 2026-01-29 09:16:31.176604375 +0000 UTC m=+0.022970351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:16:31 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f109f5ffafa059953144a0264b47bfad263617f3c911e7c0ed2eceb087c37a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:16:31 compute-0 sshd-session[96497]: Connection closed by 192.168.122.30 port 57714
Jan 29 09:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f109f5ffafa059953144a0264b47bfad263617f3c911e7c0ed2eceb087c37a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f109f5ffafa059953144a0264b47bfad263617f3c911e7c0ed2eceb087c37a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f109f5ffafa059953144a0264b47bfad263617f3c911e7c0ed2eceb087c37a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:16:31 compute-0 sshd-session[96494]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:16:31 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 29 09:16:31 compute-0 systemd[1]: session-34.scope: Consumed 1min 6.182s CPU time.
Jan 29 09:16:31 compute-0 systemd-logind[799]: Session 34 logged out. Waiting for processes to exit.
Jan 29 09:16:31 compute-0 systemd-logind[799]: Removed session 34.
Jan 29 09:16:31 compute-0 podman[103707]: 2026-01-29 09:16:31.295226338 +0000 UTC m=+0.141592314 container init ccad6357bf39ad79e9976aa1f47f278f2a52795a0c7d939cf3813f3e25c29637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 09:16:31 compute-0 podman[103707]: 2026-01-29 09:16:31.304059453 +0000 UTC m=+0.150425419 container start ccad6357bf39ad79e9976aa1f47f278f2a52795a0c7d939cf3813f3e25c29637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lumiere, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:16:31 compute-0 podman[103707]: 2026-01-29 09:16:31.310272458 +0000 UTC m=+0.156638424 container attach ccad6357bf39ad79e9976aa1f47f278f2a52795a0c7d939cf3813f3e25c29637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lumiere, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:16:31 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 29 09:16:31 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 29 09:16:31 compute-0 ceph-mon[75183]: 5.4 scrub starts
Jan 29 09:16:31 compute-0 ceph-mon[75183]: 5.4 scrub ok
Jan 29 09:16:31 compute-0 lvm[103803]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:16:31 compute-0 lvm[103802]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:16:31 compute-0 lvm[103803]: VG ceph_vg1 finished
Jan 29 09:16:31 compute-0 lvm[103802]: VG ceph_vg0 finished
Jan 29 09:16:31 compute-0 lvm[103805]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:16:31 compute-0 lvm[103805]: VG ceph_vg2 finished
Jan 29 09:16:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v175: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:32 compute-0 lucid_lumiere[103724]: {}
Jan 29 09:16:32 compute-0 systemd[1]: libpod-ccad6357bf39ad79e9976aa1f47f278f2a52795a0c7d939cf3813f3e25c29637.scope: Deactivated successfully.
Jan 29 09:16:32 compute-0 systemd[1]: libpod-ccad6357bf39ad79e9976aa1f47f278f2a52795a0c7d939cf3813f3e25c29637.scope: Consumed 1.229s CPU time.
Jan 29 09:16:32 compute-0 podman[103707]: 2026-01-29 09:16:32.103256508 +0000 UTC m=+0.949622474 container died ccad6357bf39ad79e9976aa1f47f278f2a52795a0c7d939cf3813f3e25c29637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lumiere, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:16:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f109f5ffafa059953144a0264b47bfad263617f3c911e7c0ed2eceb087c37a1-merged.mount: Deactivated successfully.
Jan 29 09:16:32 compute-0 podman[103707]: 2026-01-29 09:16:32.148714587 +0000 UTC m=+0.995080563 container remove ccad6357bf39ad79e9976aa1f47f278f2a52795a0c7d939cf3813f3e25c29637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:16:32 compute-0 systemd[1]: libpod-conmon-ccad6357bf39ad79e9976aa1f47f278f2a52795a0c7d939cf3813f3e25c29637.scope: Deactivated successfully.
Jan 29 09:16:32 compute-0 sudo[103601]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:16:32 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:16:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:16:32 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:16:32 compute-0 sudo[103821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:16:32 compute-0 sudo[103821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:16:32 compute-0 sudo[103821]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:32 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 29 09:16:32 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 29 09:16:32 compute-0 ceph-mon[75183]: pgmap v175: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:32 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:16:32 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:16:32 compute-0 ceph-mon[75183]: 3.1d scrub starts
Jan 29 09:16:32 compute-0 ceph-mon[75183]: 3.1d scrub ok
Jan 29 09:16:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:16:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v176: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:34 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 29 09:16:34 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 29 09:16:35 compute-0 ceph-mon[75183]: pgmap v176: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:35 compute-0 ceph-mon[75183]: 5.7 scrub starts
Jan 29 09:16:35 compute-0 ceph-mon[75183]: 5.7 scrub ok
Jan 29 09:16:35 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 29 09:16:35 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 29 09:16:35 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 29 09:16:35 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 29 09:16:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v177: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:36 compute-0 ceph-mon[75183]: 3.1e scrub starts
Jan 29 09:16:36 compute-0 ceph-mon[75183]: 3.1e scrub ok
Jan 29 09:16:36 compute-0 ceph-mon[75183]: 7.9 scrub starts
Jan 29 09:16:36 compute-0 ceph-mon[75183]: 7.9 scrub ok
Jan 29 09:16:36 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 29 09:16:36 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 29 09:16:37 compute-0 ceph-mon[75183]: pgmap v177: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:37 compute-0 ceph-mon[75183]: 3.1 scrub starts
Jan 29 09:16:37 compute-0 ceph-mon[75183]: 3.1 scrub ok
Jan 29 09:16:37 compute-0 sshd-session[103846]: Accepted publickey for zuul from 192.168.122.30 port 43270 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:16:37 compute-0 systemd-logind[799]: New session 35 of user zuul.
Jan 29 09:16:37 compute-0 systemd[1]: Started Session 35 of User zuul.
Jan 29 09:16:37 compute-0 sshd-session[103846]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:16:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 29 09:16:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 29 09:16:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v178: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:38 compute-0 ceph-mon[75183]: 3.5 scrub starts
Jan 29 09:16:38 compute-0 ceph-mon[75183]: 3.5 scrub ok
Jan 29 09:16:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:16:38 compute-0 python3.9[103999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:16:38 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 29 09:16:38 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 29 09:16:39 compute-0 ceph-mon[75183]: pgmap v178: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:39 compute-0 ceph-mon[75183]: 7.3 scrub starts
Jan 29 09:16:39 compute-0 ceph-mon[75183]: 7.3 scrub ok
Jan 29 09:16:39 compute-0 sudo[104153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkmmwiarxbmezuuhwczgshohrmjfvirv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678198.913126-31-206127919231241/AnsiballZ_getent.py'
Jan 29 09:16:39 compute-0 sudo[104153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:39 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 29 09:16:39 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 29 09:16:39 compute-0 python3.9[104155]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 29 09:16:39 compute-0 sudo[104153]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v179: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:40 compute-0 ceph-mon[75183]: 3.8 scrub starts
Jan 29 09:16:40 compute-0 ceph-mon[75183]: 3.8 scrub ok
Jan 29 09:16:40 compute-0 sudo[104306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmghcitjaqcnzsojnrrsgvnjwujwitxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678199.8808942-43-107007592350135/AnsiballZ_setup.py'
Jan 29 09:16:40 compute-0 sudo[104306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:40 compute-0 python3.9[104308]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:16:40 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 29 09:16:40 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 29 09:16:40 compute-0 sudo[104306]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 29 09:16:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 29 09:16:41 compute-0 ceph-mon[75183]: pgmap v179: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:41 compute-0 ceph-mon[75183]: 2.1d scrub starts
Jan 29 09:16:41 compute-0 ceph-mon[75183]: 2.1d scrub ok
Jan 29 09:16:41 compute-0 sudo[104390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smvnykoovjjzmyiovywiwsprnscckcdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678199.8808942-43-107007592350135/AnsiballZ_dnf.py'
Jan 29 09:16:41 compute-0 sudo[104390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:41 compute-0 python3.9[104392]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 29 09:16:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v180: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:42 compute-0 ceph-mon[75183]: 5.16 scrub starts
Jan 29 09:16:42 compute-0 ceph-mon[75183]: 5.16 scrub ok
Jan 29 09:16:42 compute-0 sudo[104390]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:43 compute-0 ceph-mon[75183]: pgmap v180: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:16:43 compute-0 sudo[104543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmgipxzdvdhegjxfftcfkxkjozlwyjqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678203.0765479-57-219929498209953/AnsiballZ_dnf.py'
Jan 29 09:16:43 compute-0 sudo[104543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:43 compute-0 python3.9[104545]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:16:43 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 29 09:16:43 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 29 09:16:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v181: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:44 compute-0 ceph-mon[75183]: pgmap v181: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:44 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 29 09:16:44 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 29 09:16:44 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 29 09:16:44 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 29 09:16:45 compute-0 sudo[104543]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:45 compute-0 ceph-mon[75183]: 3.c scrub starts
Jan 29 09:16:45 compute-0 ceph-mon[75183]: 3.c scrub ok
Jan 29 09:16:45 compute-0 ceph-mon[75183]: 7.1f scrub starts
Jan 29 09:16:45 compute-0 ceph-mon[75183]: 7.1f scrub ok
Jan 29 09:16:45 compute-0 sudo[104696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lekyqcmpazqwjnbebsrboheaqoygidpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678205.1711462-65-188290443272282/AnsiballZ_systemd.py'
Jan 29 09:16:45 compute-0 sudo[104696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v182: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:46 compute-0 python3.9[104698]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 09:16:46 compute-0 sudo[104696]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:46 compute-0 ceph-mon[75183]: 2.d scrub starts
Jan 29 09:16:46 compute-0 ceph-mon[75183]: 2.d scrub ok
Jan 29 09:16:46 compute-0 ceph-mon[75183]: pgmap v182: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:47 compute-0 python3.9[104851]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:16:47 compute-0 sudo[105001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vieenkrzzvxjyuagacgpmhuusnrtoofb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678207.2234704-83-42485747304338/AnsiballZ_sefcontext.py'
Jan 29 09:16:47 compute-0 sudo[105001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:47 compute-0 python3.9[105003]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 29 09:16:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v183: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:48 compute-0 sudo[105001]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:16:48 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 29 09:16:48 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 29 09:16:48 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 29 09:16:48 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 29 09:16:48 compute-0 python3.9[105153]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:16:49 compute-0 ceph-mon[75183]: pgmap v183: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:49 compute-0 ceph-mon[75183]: 3.1b scrub starts
Jan 29 09:16:49 compute-0 ceph-mon[75183]: 3.1b scrub ok
Jan 29 09:16:49 compute-0 sudo[105309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mioflespienkuqubphddlsyzictterge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678209.2598865-101-112142161458889/AnsiballZ_dnf.py'
Jan 29 09:16:49 compute-0 sudo[105309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:49 compute-0 python3.9[105311]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:16:49 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 29 09:16:49 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 29 09:16:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v184: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:50 compute-0 ceph-mon[75183]: 2.5 scrub starts
Jan 29 09:16:50 compute-0 ceph-mon[75183]: 2.5 scrub ok
Jan 29 09:16:51 compute-0 sudo[105309]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:51 compute-0 ceph-mon[75183]: 2.4 scrub starts
Jan 29 09:16:51 compute-0 ceph-mon[75183]: 2.4 scrub ok
Jan 29 09:16:51 compute-0 ceph-mon[75183]: pgmap v184: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:51 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 29 09:16:51 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 29 09:16:51 compute-0 sudo[105462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spvrfonsbnhuzuaaueallvasdgilqzig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678211.2961729-109-68851063602142/AnsiballZ_command.py'
Jan 29 09:16:51 compute-0 sudo[105462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:51 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 29 09:16:51 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 29 09:16:51 compute-0 python3.9[105464]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:16:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v185: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:52 compute-0 ceph-mon[75183]: 2.18 scrub starts
Jan 29 09:16:52 compute-0 ceph-mon[75183]: 2.18 scrub ok
Jan 29 09:16:52 compute-0 sudo[105462]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:52 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 29 09:16:52 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 29 09:16:53 compute-0 sudo[105749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seglmffhhymfzudtjuvylcfwubcylkfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678212.744041-117-185267757667568/AnsiballZ_file.py'
Jan 29 09:16:53 compute-0 sudo[105749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:53 compute-0 ceph-mon[75183]: 2.17 scrub starts
Jan 29 09:16:53 compute-0 ceph-mon[75183]: 2.17 scrub ok
Jan 29 09:16:53 compute-0 ceph-mon[75183]: pgmap v185: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:53 compute-0 ceph-mon[75183]: 7.4 scrub starts
Jan 29 09:16:53 compute-0 ceph-mon[75183]: 7.4 scrub ok
Jan 29 09:16:53 compute-0 python3.9[105751]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 29 09:16:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:16:53 compute-0 sudo[105749]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:53 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 29 09:16:53 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 29 09:16:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v186: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:54 compute-0 python3.9[105901]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:16:54 compute-0 sudo[106053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqyttoqtdrjkcvjnlqjcpuxgqwcnxcom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678214.229802-133-79845032151682/AnsiballZ_dnf.py'
Jan 29 09:16:54 compute-0 sudo[106053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:54 compute-0 python3.9[106055]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:16:54 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 29 09:16:54 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 29 09:16:55 compute-0 ceph-mon[75183]: 2.7 scrub starts
Jan 29 09:16:55 compute-0 ceph-mon[75183]: 2.7 scrub ok
Jan 29 09:16:55 compute-0 ceph-mon[75183]: pgmap v186: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:55 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 29 09:16:55 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 29 09:16:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:16:55
Jan 29 09:16:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:16:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:16:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', 'backups', 'vms', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images']
Jan 29 09:16:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v187: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:56 compute-0 sudo[106053]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:56 compute-0 ceph-mon[75183]: 4.10 scrub starts
Jan 29 09:16:56 compute-0 ceph-mon[75183]: 4.10 scrub ok
Jan 29 09:16:56 compute-0 sudo[106206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlgulvcodypszygwozzbjsnwtfqfawgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678216.2408295-142-267967196925131/AnsiballZ_dnf.py'
Jan 29 09:16:56 compute-0 sudo[106206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:16:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:16:56 compute-0 python3.9[106208]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:16:56 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 29 09:16:56 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 29 09:16:56 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Jan 29 09:16:56 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Jan 29 09:16:57 compute-0 ceph-mon[75183]: 5.1e scrub starts
Jan 29 09:16:57 compute-0 ceph-mon[75183]: 5.1e scrub ok
Jan 29 09:16:57 compute-0 ceph-mon[75183]: pgmap v187: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:57 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 29 09:16:57 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 29 09:16:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v188: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:16:58 compute-0 sudo[106206]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:58 compute-0 ceph-mon[75183]: 2.19 scrub starts
Jan 29 09:16:58 compute-0 ceph-mon[75183]: 2.19 scrub ok
Jan 29 09:16:58 compute-0 ceph-mon[75183]: 6.1d scrub starts
Jan 29 09:16:58 compute-0 ceph-mon[75183]: 6.1d scrub ok
Jan 29 09:16:58 compute-0 ceph-mon[75183]: 3.f scrub starts
Jan 29 09:16:58 compute-0 ceph-mon[75183]: 3.f scrub ok
Jan 29 09:16:58 compute-0 ceph-mon[75183]: pgmap v188: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:16:58 compute-0 sudo[106359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prwcgqnzrokprcskbljqauedklwbwijp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678218.7340071-154-272598580518828/AnsiballZ_stat.py'
Jan 29 09:16:58 compute-0 sudo[106359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:59 compute-0 python3.9[106361]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:16:59 compute-0 sudo[106359]: pam_unix(sudo:session): session closed for user root
Jan 29 09:16:59 compute-0 sudo[106513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miuzrdkwzhhwqyirgnnflrskhbgypjsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678219.3209593-162-5570561378391/AnsiballZ_slurp.py'
Jan 29 09:16:59 compute-0 sudo[106513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:16:59 compute-0 python3.9[106515]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 29 09:16:59 compute-0 sudo[106513]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v189: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:00 compute-0 sshd-session[103849]: Connection closed by 192.168.122.30 port 43270
Jan 29 09:17:00 compute-0 sshd-session[103846]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:17:00 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 29 09:17:00 compute-0 systemd[1]: session-35.scope: Consumed 17.417s CPU time.
Jan 29 09:17:00 compute-0 systemd-logind[799]: Session 35 logged out. Waiting for processes to exit.
Jan 29 09:17:00 compute-0 systemd-logind[799]: Removed session 35.
Jan 29 09:17:01 compute-0 ceph-mon[75183]: pgmap v189: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:17:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:17:01 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 29 09:17:01 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 29 09:17:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v190: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:03 compute-0 ceph-mon[75183]: 2.6 scrub starts
Jan 29 09:17:03 compute-0 ceph-mon[75183]: 2.6 scrub ok
Jan 29 09:17:03 compute-0 ceph-mon[75183]: pgmap v190: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:17:03 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 29 09:17:03 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 29 09:17:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v191: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:05 compute-0 ceph-mon[75183]: 3.17 scrub starts
Jan 29 09:17:05 compute-0 ceph-mon[75183]: 3.17 scrub ok
Jan 29 09:17:05 compute-0 ceph-mon[75183]: pgmap v191: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v192: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:06 compute-0 sshd-session[106540]: Accepted publickey for zuul from 192.168.122.30 port 59538 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:17:06 compute-0 systemd-logind[799]: New session 36 of user zuul.
Jan 29 09:17:06 compute-0 systemd[1]: Started Session 36 of User zuul.
Jan 29 09:17:06 compute-0 sshd-session[106540]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:17:07 compute-0 ceph-mon[75183]: pgmap v192: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:07 compute-0 python3.9[106693]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:17:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v193: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:08 compute-0 python3.9[106847]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:17:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:17:08 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 29 09:17:08 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 29 09:17:09 compute-0 ceph-mon[75183]: pgmap v193: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:09 compute-0 python3.9[107040]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:17:09 compute-0 sshd-session[106543]: Connection closed by 192.168.122.30 port 59538
Jan 29 09:17:09 compute-0 sshd-session[106540]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:17:10 compute-0 systemd-logind[799]: Session 36 logged out. Waiting for processes to exit.
Jan 29 09:17:10 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Jan 29 09:17:10 compute-0 systemd[1]: session-36.scope: Consumed 2.146s CPU time.
Jan 29 09:17:10 compute-0 systemd-logind[799]: Removed session 36.
Jan 29 09:17:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v194: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:10 compute-0 ceph-mon[75183]: 5.1 scrub starts
Jan 29 09:17:10 compute-0 ceph-mon[75183]: 5.1 scrub ok
Jan 29 09:17:11 compute-0 ceph-mon[75183]: pgmap v194: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v195: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:12 compute-0 ceph-mon[75183]: pgmap v195: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:17:13 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 29 09:17:13 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 29 09:17:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v196: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:15 compute-0 ceph-mon[75183]: 2.9 scrub starts
Jan 29 09:17:15 compute-0 ceph-mon[75183]: 2.9 scrub ok
Jan 29 09:17:15 compute-0 ceph-mon[75183]: pgmap v196: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:15 compute-0 sshd-session[107067]: Accepted publickey for zuul from 192.168.122.30 port 48740 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:17:15 compute-0 systemd-logind[799]: New session 37 of user zuul.
Jan 29 09:17:15 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 29 09:17:15 compute-0 sshd-session[107067]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:17:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v197: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:16 compute-0 ceph-mon[75183]: pgmap v197: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:16 compute-0 python3.9[107220]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:17:16 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 29 09:17:16 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 29 09:17:17 compute-0 ceph-mon[75183]: 5.f scrub starts
Jan 29 09:17:17 compute-0 ceph-mon[75183]: 5.f scrub ok
Jan 29 09:17:17 compute-0 python3.9[107374]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:17:17 compute-0 sudo[107528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alsojdlurhwdlnpvsouazwlzmmvfdaym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678237.7085695-35-173328402971677/AnsiballZ_setup.py'
Jan 29 09:17:17 compute-0 sudo[107528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v198: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:18 compute-0 python3.9[107530]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:17:18 compute-0 ceph-mon[75183]: pgmap v198: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:17:18 compute-0 sudo[107528]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:18 compute-0 sudo[107612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlcqaknzexdaxurypszmlmasbtqiiupr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678237.7085695-35-173328402971677/AnsiballZ_dnf.py'
Jan 29 09:17:18 compute-0 sudo[107612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:19 compute-0 python3.9[107614]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:17:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v199: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:20 compute-0 sudo[107612]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:20 compute-0 sudo[107765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilsjhsyjiiscqmbxbdphlxscinvsmynw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678240.6813304-47-62312516582063/AnsiballZ_setup.py'
Jan 29 09:17:21 compute-0 sudo[107765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:21 compute-0 ceph-mon[75183]: pgmap v199: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:21 compute-0 python3.9[107767]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:17:21 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 29 09:17:21 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 29 09:17:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v200: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:22 compute-0 sudo[107765]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:22 compute-0 ceph-mon[75183]: 5.1a scrub starts
Jan 29 09:17:22 compute-0 ceph-mon[75183]: 5.1a scrub ok
Jan 29 09:17:22 compute-0 ceph-mon[75183]: pgmap v200: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:22 compute-0 sudo[107960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjbsfpmxzqirhibtpfxrixbckfzshjme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678242.4402437-58-206001135228678/AnsiballZ_file.py'
Jan 29 09:17:22 compute-0 sudo[107960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:23 compute-0 python3.9[107962]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:17:23 compute-0 sudo[107960]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:17:23 compute-0 sudo[108112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esxrnymwctulgumkavcihjolmmnaezto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678243.2460465-66-85926283925069/AnsiballZ_command.py'
Jan 29 09:17:23 compute-0 sudo[108112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:23 compute-0 python3.9[108114]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:17:23 compute-0 sudo[108112]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v201: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:24 compute-0 ceph-mon[75183]: pgmap v201: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:24 compute-0 sudo[108277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-depahipdhoaiwsjuzuvymfzcgukwgggk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678244.0437875-74-26626669266928/AnsiballZ_stat.py'
Jan 29 09:17:24 compute-0 sudo[108277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:24 compute-0 python3.9[108279]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:17:24 compute-0 sudo[108277]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:24 compute-0 sudo[108355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwaqnzvrbrjbnsxodplgxfolrzjobsjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678244.0437875-74-26626669266928/AnsiballZ_file.py'
Jan 29 09:17:24 compute-0 sudo[108355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:24 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 29 09:17:24 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 29 09:17:25 compute-0 python3.9[108357]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:17:25 compute-0 sudo[108355]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:25 compute-0 ceph-mon[75183]: 2.3 scrub starts
Jan 29 09:17:25 compute-0 ceph-mon[75183]: 2.3 scrub ok
Jan 29 09:17:25 compute-0 sudo[108507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pijqlvxdtdldhsxiwzgcngskkawphwkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678245.261311-86-230420226426312/AnsiballZ_stat.py'
Jan 29 09:17:25 compute-0 sudo[108507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:25 compute-0 python3.9[108509]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:17:25 compute-0 sudo[108507]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:25 compute-0 sudo[108585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwdzqcshbjhansgchdpkokitryyudjun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678245.261311-86-230420226426312/AnsiballZ_file.py'
Jan 29 09:17:25 compute-0 sudo[108585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v202: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:26 compute-0 python3.9[108587]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:17:26 compute-0 sudo[108585]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:26 compute-0 ceph-mon[75183]: pgmap v202: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:17:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:17:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:17:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:17:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:17:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:17:26 compute-0 sudo[108737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrymzsxsfurfmxjdvkeimkoexziwnkmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678246.361478-99-135816161123463/AnsiballZ_ini_file.py'
Jan 29 09:17:26 compute-0 sudo[108737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:26 compute-0 python3.9[108739]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:17:26 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 29 09:17:27 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 29 09:17:27 compute-0 sudo[108737]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:27 compute-0 ceph-mon[75183]: 5.1d scrub starts
Jan 29 09:17:27 compute-0 ceph-mon[75183]: 5.1d scrub ok
Jan 29 09:17:27 compute-0 sudo[108889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxhtpbkkdrukvausuppnxbeayiomyvqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678247.2280707-99-20943310865307/AnsiballZ_ini_file.py'
Jan 29 09:17:27 compute-0 sudo[108889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:27 compute-0 python3.9[108891]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:17:27 compute-0 sudo[108889]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v203: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:28 compute-0 sudo[109041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xibumilmxllcmronoxktlxjqgmezovpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678247.840659-99-37743952970707/AnsiballZ_ini_file.py'
Jan 29 09:17:28 compute-0 sudo[109041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:28 compute-0 python3.9[109043]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:17:28 compute-0 sudo[109041]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:28 compute-0 ceph-mon[75183]: pgmap v203: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:17:28 compute-0 sudo[109193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtqxtbnqrutblpjfypkfzddfpplzpxbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678248.4008493-99-177763360882569/AnsiballZ_ini_file.py'
Jan 29 09:17:28 compute-0 sudo[109193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:28 compute-0 python3.9[109195]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:17:28 compute-0 sudo[109193]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:29 compute-0 sudo[109345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsjqkhkdnuknecwlowvlipstghiblgoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678249.080734-130-200911632613453/AnsiballZ_dnf.py'
Jan 29 09:17:29 compute-0 sudo[109345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:29 compute-0 python3.9[109347]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:17:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v204: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:30 compute-0 ceph-mon[75183]: pgmap v204: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:31 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 29 09:17:31 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 29 09:17:31 compute-0 sudo[109345]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:31 compute-0 ceph-mon[75183]: 5.19 scrub starts
Jan 29 09:17:31 compute-0 ceph-mon[75183]: 5.19 scrub ok
Jan 29 09:17:32 compute-0 sudo[109498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeaswuhqzzpqypdrztcfqsmwvdxcbdka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678251.5333085-141-118349818949999/AnsiballZ_setup.py'
Jan 29 09:17:32 compute-0 sudo[109498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:32 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 29 09:17:32 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 29 09:17:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v205: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:32 compute-0 ceph-mon[75183]: 5.c scrub starts
Jan 29 09:17:32 compute-0 python3.9[109500]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:17:32 compute-0 ceph-mon[75183]: 5.c scrub ok
Jan 29 09:17:32 compute-0 ceph-mon[75183]: pgmap v205: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:32 compute-0 sudo[109501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:17:32 compute-0 sudo[109501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:17:32 compute-0 sudo[109501]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:32 compute-0 sudo[109498]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:32 compute-0 sudo[109528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:17:32 compute-0 sudo[109528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:17:32 compute-0 sudo[109722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebhtfpcwaazxskzyvouhlkcrhouzguco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678252.5162477-149-79974184000655/AnsiballZ_stat.py'
Jan 29 09:17:32 compute-0 sudo[109722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:32 compute-0 sudo[109528]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:17:32 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:17:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:17:32 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:17:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:17:32 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:17:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:17:32 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:17:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:17:32 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:17:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:17:32 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:17:32 compute-0 sudo[109737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:17:32 compute-0 sudo[109737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:17:32 compute-0 sudo[109737]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:32 compute-0 python3.9[109730]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:17:33 compute-0 sudo[109762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:17:33 compute-0 sudo[109722]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:33 compute-0 sudo[109762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:17:33 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 29 09:17:33 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 29 09:17:33 compute-0 podman[109876]: 2026-01-29 09:17:33.289970834 +0000 UTC m=+0.042532592 container create ca78564315918091bf7631fb265d55b6ebdc1173220fbec19d9b7133d429cecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:17:33 compute-0 systemd[76570]: Created slice User Background Tasks Slice.
Jan 29 09:17:33 compute-0 systemd[76570]: Starting Cleanup of User's Temporary Files and Directories...
Jan 29 09:17:33 compute-0 systemd[76570]: Finished Cleanup of User's Temporary Files and Directories.
Jan 29 09:17:33 compute-0 systemd[1]: Started libpod-conmon-ca78564315918091bf7631fb265d55b6ebdc1173220fbec19d9b7133d429cecc.scope.
Jan 29 09:17:33 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:17:33 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:17:33 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:17:33 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:17:33 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:17:33 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:17:33 compute-0 ceph-mon[75183]: 5.18 scrub starts
Jan 29 09:17:33 compute-0 ceph-mon[75183]: 5.18 scrub ok
Jan 29 09:17:33 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:17:33 compute-0 podman[109876]: 2026-01-29 09:17:33.358953084 +0000 UTC m=+0.111514862 container init ca78564315918091bf7631fb265d55b6ebdc1173220fbec19d9b7133d429cecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:17:33 compute-0 podman[109876]: 2026-01-29 09:17:33.269701246 +0000 UTC m=+0.022263024 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:17:33 compute-0 podman[109876]: 2026-01-29 09:17:33.366574274 +0000 UTC m=+0.119136032 container start ca78564315918091bf7631fb265d55b6ebdc1173220fbec19d9b7133d429cecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_chatelet, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 09:17:33 compute-0 podman[109876]: 2026-01-29 09:17:33.37117852 +0000 UTC m=+0.123740298 container attach ca78564315918091bf7631fb265d55b6ebdc1173220fbec19d9b7133d429cecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_chatelet, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:17:33 compute-0 awesome_chatelet[109916]: 167 167
Jan 29 09:17:33 compute-0 systemd[1]: libpod-ca78564315918091bf7631fb265d55b6ebdc1173220fbec19d9b7133d429cecc.scope: Deactivated successfully.
Jan 29 09:17:33 compute-0 podman[109876]: 2026-01-29 09:17:33.373322489 +0000 UTC m=+0.125884247 container died ca78564315918091bf7631fb265d55b6ebdc1173220fbec19d9b7133d429cecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_chatelet, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:17:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bfc1fed5c4dba85238753d548a5cfbf2a7a6f31b99285bfdf2cdf5db401b445-merged.mount: Deactivated successfully.
Jan 29 09:17:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:17:33 compute-0 podman[109876]: 2026-01-29 09:17:33.414804032 +0000 UTC m=+0.167365790 container remove ca78564315918091bf7631fb265d55b6ebdc1173220fbec19d9b7133d429cecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_chatelet, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:17:33 compute-0 systemd[1]: libpod-conmon-ca78564315918091bf7631fb265d55b6ebdc1173220fbec19d9b7133d429cecc.scope: Deactivated successfully.
Jan 29 09:17:33 compute-0 sudo[109981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydmaacezrenmhxrcmcpthflqvonpbsbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678253.169235-158-170386144521961/AnsiballZ_stat.py'
Jan 29 09:17:33 compute-0 sudo[109981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:33 compute-0 podman[109991]: 2026-01-29 09:17:33.538067776 +0000 UTC m=+0.041788582 container create db704610128560652b1730b2444c58e5a1328e03aaac55404b1f9a0b45a0d30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 09:17:33 compute-0 systemd[1]: Started libpod-conmon-db704610128560652b1730b2444c58e5a1328e03aaac55404b1f9a0b45a0d30f.scope.
Jan 29 09:17:33 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:17:33 compute-0 podman[109991]: 2026-01-29 09:17:33.521369036 +0000 UTC m=+0.025089872 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f074690c9d09c9b61bcd6ff71e29b0951a0378568f049ac5be0a0b6705d41e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f074690c9d09c9b61bcd6ff71e29b0951a0378568f049ac5be0a0b6705d41e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f074690c9d09c9b61bcd6ff71e29b0951a0378568f049ac5be0a0b6705d41e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f074690c9d09c9b61bcd6ff71e29b0951a0378568f049ac5be0a0b6705d41e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f074690c9d09c9b61bcd6ff71e29b0951a0378568f049ac5be0a0b6705d41e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:17:33 compute-0 python3.9[109985]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:17:33 compute-0 podman[109991]: 2026-01-29 09:17:33.643227702 +0000 UTC m=+0.146948518 container init db704610128560652b1730b2444c58e5a1328e03aaac55404b1f9a0b45a0d30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kirch, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:17:33 compute-0 podman[109991]: 2026-01-29 09:17:33.651730816 +0000 UTC m=+0.155451622 container start db704610128560652b1730b2444c58e5a1328e03aaac55404b1f9a0b45a0d30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kirch, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:17:33 compute-0 podman[109991]: 2026-01-29 09:17:33.655950042 +0000 UTC m=+0.159670868 container attach db704610128560652b1730b2444c58e5a1328e03aaac55404b1f9a0b45a0d30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kirch, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:17:33 compute-0 sudo[109981]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v206: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:34 compute-0 sleepy_kirch[110007]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:17:34 compute-0 sleepy_kirch[110007]: --> All data devices are unavailable
Jan 29 09:17:34 compute-0 systemd[1]: libpod-db704610128560652b1730b2444c58e5a1328e03aaac55404b1f9a0b45a0d30f.scope: Deactivated successfully.
Jan 29 09:17:34 compute-0 podman[109991]: 2026-01-29 09:17:34.127182649 +0000 UTC m=+0.630903445 container died db704610128560652b1730b2444c58e5a1328e03aaac55404b1f9a0b45a0d30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:17:34 compute-0 sudo[110176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkgamhvikyzkgtqzuariknwymkqpxlew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678253.8833046-168-178729627788002/AnsiballZ_command.py'
Jan 29 09:17:34 compute-0 sudo[110176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:34 compute-0 python3.9[110179]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:17:34 compute-0 sudo[110176]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4f074690c9d09c9b61bcd6ff71e29b0951a0378568f049ac5be0a0b6705d41e-merged.mount: Deactivated successfully.
Jan 29 09:17:34 compute-0 ceph-mon[75183]: pgmap v206: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:34 compute-0 podman[109991]: 2026-01-29 09:17:34.415466308 +0000 UTC m=+0.919187124 container remove db704610128560652b1730b2444c58e5a1328e03aaac55404b1f9a0b45a0d30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kirch, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:17:34 compute-0 systemd[1]: libpod-conmon-db704610128560652b1730b2444c58e5a1328e03aaac55404b1f9a0b45a0d30f.scope: Deactivated successfully.
Jan 29 09:17:34 compute-0 sudo[109762]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:34 compute-0 sudo[110215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:17:34 compute-0 sudo[110215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:17:34 compute-0 sudo[110215]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:34 compute-0 sudo[110240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:17:34 compute-0 sudo[110240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:17:34 compute-0 podman[110329]: 2026-01-29 09:17:34.902281284 +0000 UTC m=+0.093407973 container create 1f4aa62fa31eb9591b93d2256eb29d4216eb5c723a8cf0d9ba237d388611686e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hamilton, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:17:34 compute-0 podman[110329]: 2026-01-29 09:17:34.831336171 +0000 UTC m=+0.022462890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:17:35 compute-0 systemd[1]: Started libpod-conmon-1f4aa62fa31eb9591b93d2256eb29d4216eb5c723a8cf0d9ba237d388611686e.scope.
Jan 29 09:17:35 compute-0 sudo[110416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdokzjpdlgqwjvwquuedqrintexwzyqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678254.5800905-178-17474844481304/AnsiballZ_service_facts.py'
Jan 29 09:17:35 compute-0 sudo[110416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:35 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:17:35 compute-0 podman[110329]: 2026-01-29 09:17:35.062968088 +0000 UTC m=+0.254094807 container init 1f4aa62fa31eb9591b93d2256eb29d4216eb5c723a8cf0d9ba237d388611686e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hamilton, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:17:35 compute-0 podman[110329]: 2026-01-29 09:17:35.071451002 +0000 UTC m=+0.262577691 container start 1f4aa62fa31eb9591b93d2256eb29d4216eb5c723a8cf0d9ba237d388611686e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 09:17:35 compute-0 elegant_hamilton[110420]: 167 167
Jan 29 09:17:35 compute-0 systemd[1]: libpod-1f4aa62fa31eb9591b93d2256eb29d4216eb5c723a8cf0d9ba237d388611686e.scope: Deactivated successfully.
Jan 29 09:17:35 compute-0 podman[110329]: 2026-01-29 09:17:35.089075317 +0000 UTC m=+0.280202016 container attach 1f4aa62fa31eb9591b93d2256eb29d4216eb5c723a8cf0d9ba237d388611686e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hamilton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:17:35 compute-0 podman[110329]: 2026-01-29 09:17:35.08954121 +0000 UTC m=+0.280667899 container died 1f4aa62fa31eb9591b93d2256eb29d4216eb5c723a8cf0d9ba237d388611686e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hamilton, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 09:17:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e951afa83dc94ffa935514b1000ccd0a17bbbf82bb9418aca1a1218b9303a88d-merged.mount: Deactivated successfully.
Jan 29 09:17:35 compute-0 python3.9[110422]: ansible-service_facts Invoked
Jan 29 09:17:35 compute-0 podman[110329]: 2026-01-29 09:17:35.226833161 +0000 UTC m=+0.417959850 container remove 1f4aa62fa31eb9591b93d2256eb29d4216eb5c723a8cf0d9ba237d388611686e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hamilton, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 09:17:35 compute-0 systemd[1]: libpod-conmon-1f4aa62fa31eb9591b93d2256eb29d4216eb5c723a8cf0d9ba237d388611686e.scope: Deactivated successfully.
Jan 29 09:17:35 compute-0 network[110458]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 09:17:35 compute-0 network[110459]: 'network-scripts' will be removed from distribution in near future.
Jan 29 09:17:35 compute-0 network[110460]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 09:17:35 compute-0 podman[110471]: 2026-01-29 09:17:35.346001583 +0000 UTC m=+0.026697297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:17:35 compute-0 podman[110471]: 2026-01-29 09:17:35.457660027 +0000 UTC m=+0.138355721 container create 0c8d2e9cb6b75d4ff0de50bdb48242d36f1b29e45fd3ed327e3eac978d0f12e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ellis, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 09:17:35 compute-0 systemd[1]: Started libpod-conmon-0c8d2e9cb6b75d4ff0de50bdb48242d36f1b29e45fd3ed327e3eac978d0f12e8.scope.
Jan 29 09:17:35 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1645a8b19c55db694b104b2ceeb73894ad5e769a0a707a3e79bc68c40fafacf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1645a8b19c55db694b104b2ceeb73894ad5e769a0a707a3e79bc68c40fafacf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1645a8b19c55db694b104b2ceeb73894ad5e769a0a707a3e79bc68c40fafacf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1645a8b19c55db694b104b2ceeb73894ad5e769a0a707a3e79bc68c40fafacf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:17:35 compute-0 podman[110471]: 2026-01-29 09:17:35.83939404 +0000 UTC m=+0.520089754 container init 0c8d2e9cb6b75d4ff0de50bdb48242d36f1b29e45fd3ed327e3eac978d0f12e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 29 09:17:35 compute-0 podman[110471]: 2026-01-29 09:17:35.847355789 +0000 UTC m=+0.528051483 container start 0c8d2e9cb6b75d4ff0de50bdb48242d36f1b29e45fd3ed327e3eac978d0f12e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:17:35 compute-0 podman[110471]: 2026-01-29 09:17:35.851246356 +0000 UTC m=+0.531942070 container attach 0c8d2e9cb6b75d4ff0de50bdb48242d36f1b29e45fd3ed327e3eac978d0f12e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ellis, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:17:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v207: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:36 compute-0 adoring_ellis[110488]: {
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:     "0": [
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:         {
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "devices": [
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "/dev/loop3"
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             ],
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_name": "ceph_lv0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_size": "21470642176",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "name": "ceph_lv0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "tags": {
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.cluster_name": "ceph",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.crush_device_class": "",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.encrypted": "0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.objectstore": "bluestore",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.osd_id": "0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.type": "block",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.vdo": "0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.with_tpm": "0"
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             },
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "type": "block",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "vg_name": "ceph_vg0"
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:         }
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:     ],
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:     "1": [
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:         {
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "devices": [
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "/dev/loop4"
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             ],
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_name": "ceph_lv1",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_size": "21470642176",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "name": "ceph_lv1",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "tags": {
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.cluster_name": "ceph",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.crush_device_class": "",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.encrypted": "0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.objectstore": "bluestore",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.osd_id": "1",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.type": "block",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.vdo": "0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.with_tpm": "0"
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             },
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "type": "block",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "vg_name": "ceph_vg1"
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:         }
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:     ],
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:     "2": [
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:         {
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "devices": [
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "/dev/loop5"
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             ],
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_name": "ceph_lv2",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_size": "21470642176",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "name": "ceph_lv2",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "tags": {
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.cluster_name": "ceph",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.crush_device_class": "",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.encrypted": "0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.objectstore": "bluestore",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.osd_id": "2",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.type": "block",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.vdo": "0",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:                 "ceph.with_tpm": "0"
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             },
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "type": "block",
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:             "vg_name": "ceph_vg2"
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:         }
Jan 29 09:17:36 compute-0 adoring_ellis[110488]:     ]
Jan 29 09:17:36 compute-0 adoring_ellis[110488]: }
Jan 29 09:17:36 compute-0 podman[110471]: 2026-01-29 09:17:36.162417345 +0000 UTC m=+0.843113049 container died 0c8d2e9cb6b75d4ff0de50bdb48242d36f1b29e45fd3ed327e3eac978d0f12e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ellis, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 09:17:36 compute-0 systemd[1]: libpod-0c8d2e9cb6b75d4ff0de50bdb48242d36f1b29e45fd3ed327e3eac978d0f12e8.scope: Deactivated successfully.
Jan 29 09:17:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-1645a8b19c55db694b104b2ceeb73894ad5e769a0a707a3e79bc68c40fafacf9-merged.mount: Deactivated successfully.
Jan 29 09:17:36 compute-0 podman[110471]: 2026-01-29 09:17:36.222605713 +0000 UTC m=+0.903301407 container remove 0c8d2e9cb6b75d4ff0de50bdb48242d36f1b29e45fd3ed327e3eac978d0f12e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ellis, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 29 09:17:36 compute-0 systemd[1]: libpod-conmon-0c8d2e9cb6b75d4ff0de50bdb48242d36f1b29e45fd3ed327e3eac978d0f12e8.scope: Deactivated successfully.
Jan 29 09:17:36 compute-0 ceph-mon[75183]: pgmap v207: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:36 compute-0 sudo[110240]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:36 compute-0 sudo[110534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:17:36 compute-0 sudo[110534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:17:36 compute-0 sudo[110534]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:36 compute-0 sudo[110563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:17:36 compute-0 sudo[110563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:17:36 compute-0 podman[110614]: 2026-01-29 09:17:36.690766815 +0000 UTC m=+0.057679269 container create 55738a5ab6c6b2f0066643f5f69c689cafb0bcb045a662a949c7a9dbb7db33c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bose, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:17:36 compute-0 podman[110614]: 2026-01-29 09:17:36.656413289 +0000 UTC m=+0.023325773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:17:36 compute-0 systemd[1]: Started libpod-conmon-55738a5ab6c6b2f0066643f5f69c689cafb0bcb045a662a949c7a9dbb7db33c8.scope.
Jan 29 09:17:36 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:17:36 compute-0 podman[110614]: 2026-01-29 09:17:36.865691212 +0000 UTC m=+0.232603686 container init 55738a5ab6c6b2f0066643f5f69c689cafb0bcb045a662a949c7a9dbb7db33c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 09:17:36 compute-0 podman[110614]: 2026-01-29 09:17:36.872756847 +0000 UTC m=+0.239669301 container start 55738a5ab6c6b2f0066643f5f69c689cafb0bcb045a662a949c7a9dbb7db33c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bose, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 09:17:36 compute-0 adoring_bose[110639]: 167 167
Jan 29 09:17:36 compute-0 systemd[1]: libpod-55738a5ab6c6b2f0066643f5f69c689cafb0bcb045a662a949c7a9dbb7db33c8.scope: Deactivated successfully.
Jan 29 09:17:36 compute-0 podman[110614]: 2026-01-29 09:17:36.909724515 +0000 UTC m=+0.276636969 container attach 55738a5ab6c6b2f0066643f5f69c689cafb0bcb045a662a949c7a9dbb7db33c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bose, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:17:36 compute-0 podman[110614]: 2026-01-29 09:17:36.910197028 +0000 UTC m=+0.277109482 container died 55738a5ab6c6b2f0066643f5f69c689cafb0bcb045a662a949c7a9dbb7db33c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bose, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:17:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cf78d00b301eb998b0707c71939716cb43955a2671be4815e4bc41307b71095-merged.mount: Deactivated successfully.
Jan 29 09:17:37 compute-0 podman[110614]: 2026-01-29 09:17:37.057750951 +0000 UTC m=+0.424663405 container remove 55738a5ab6c6b2f0066643f5f69c689cafb0bcb045a662a949c7a9dbb7db33c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 09:17:37 compute-0 systemd[1]: libpod-conmon-55738a5ab6c6b2f0066643f5f69c689cafb0bcb045a662a949c7a9dbb7db33c8.scope: Deactivated successfully.
Jan 29 09:17:37 compute-0 podman[110685]: 2026-01-29 09:17:37.22692607 +0000 UTC m=+0.066642486 container create cc84ab7f5393d28ee64c7a9f28f24f0bd21f0d730862a826b11a474622db0770 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 09:17:37 compute-0 systemd[1]: Started libpod-conmon-cc84ab7f5393d28ee64c7a9f28f24f0bd21f0d730862a826b11a474622db0770.scope.
Jan 29 09:17:37 compute-0 podman[110685]: 2026-01-29 09:17:37.181747416 +0000 UTC m=+0.021463862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:17:37 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf9497ce4637cd902cc1115e836ffbc915126b3482fd299a382f4d40ccc376c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf9497ce4637cd902cc1115e836ffbc915126b3482fd299a382f4d40ccc376c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf9497ce4637cd902cc1115e836ffbc915126b3482fd299a382f4d40ccc376c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf9497ce4637cd902cc1115e836ffbc915126b3482fd299a382f4d40ccc376c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:17:37 compute-0 podman[110685]: 2026-01-29 09:17:37.370062762 +0000 UTC m=+0.209779198 container init cc84ab7f5393d28ee64c7a9f28f24f0bd21f0d730862a826b11a474622db0770 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 09:17:37 compute-0 podman[110685]: 2026-01-29 09:17:37.377576209 +0000 UTC m=+0.217292625 container start cc84ab7f5393d28ee64c7a9f28f24f0bd21f0d730862a826b11a474622db0770 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:17:37 compute-0 podman[110685]: 2026-01-29 09:17:37.385838996 +0000 UTC m=+0.225555412 container attach cc84ab7f5393d28ee64c7a9f28f24f0bd21f0d730862a826b11a474622db0770 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_feistel, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:17:37 compute-0 sudo[110416]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v208: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:38 compute-0 lvm[110832]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:17:38 compute-0 lvm[110833]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:17:38 compute-0 lvm[110832]: VG ceph_vg1 finished
Jan 29 09:17:38 compute-0 lvm[110833]: VG ceph_vg0 finished
Jan 29 09:17:38 compute-0 lvm[110835]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:17:38 compute-0 lvm[110835]: VG ceph_vg2 finished
Jan 29 09:17:38 compute-0 interesting_feistel[110707]: {}
Jan 29 09:17:38 compute-0 systemd[1]: libpod-cc84ab7f5393d28ee64c7a9f28f24f0bd21f0d730862a826b11a474622db0770.scope: Deactivated successfully.
Jan 29 09:17:38 compute-0 podman[110685]: 2026-01-29 09:17:38.266368845 +0000 UTC m=+1.106085291 container died cc84ab7f5393d28ee64c7a9f28f24f0bd21f0d730862a826b11a474622db0770 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_feistel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 09:17:38 compute-0 systemd[1]: libpod-cc84ab7f5393d28ee64c7a9f28f24f0bd21f0d730862a826b11a474622db0770.scope: Consumed 1.351s CPU time.
Jan 29 09:17:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:17:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-acf9497ce4637cd902cc1115e836ffbc915126b3482fd299a382f4d40ccc376c-merged.mount: Deactivated successfully.
Jan 29 09:17:38 compute-0 podman[110685]: 2026-01-29 09:17:38.484926613 +0000 UTC m=+1.324643029 container remove cc84ab7f5393d28ee64c7a9f28f24f0bd21f0d730862a826b11a474622db0770 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:17:38 compute-0 systemd[1]: libpod-conmon-cc84ab7f5393d28ee64c7a9f28f24f0bd21f0d730862a826b11a474622db0770.scope: Deactivated successfully.
Jan 29 09:17:38 compute-0 sudo[110996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-devwuqhdybazpotxuijnttowtytmgwnq ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769678258.240577-193-264749674431592/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769678258.240577-193-264749674431592/args'
Jan 29 09:17:38 compute-0 sudo[110996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:38 compute-0 sudo[110563]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:17:38 compute-0 sudo[110996]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:38 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:17:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:17:38 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:17:38 compute-0 sudo[111038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:17:38 compute-0 sudo[111038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:17:38 compute-0 sudo[111038]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:39 compute-0 sudo[111188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azyglwpwjrnjoippqwmzilriwwnvhcru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678258.8051028-204-191665532893246/AnsiballZ_dnf.py'
Jan 29 09:17:39 compute-0 sudo[111188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:39 compute-0 ceph-mon[75183]: pgmap v208: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:39 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:17:39 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:17:39 compute-0 python3.9[111190]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:17:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v209: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:40 compute-0 sudo[111188]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:41 compute-0 ceph-mon[75183]: pgmap v209: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:41 compute-0 sudo[111341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrgyyqjmrjylzffansngntysrulcnzqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678261.1504714-217-92349210489954/AnsiballZ_package_facts.py'
Jan 29 09:17:41 compute-0 sudo[111341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v210: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:42 compute-0 python3.9[111343]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 29 09:17:42 compute-0 sudo[111341]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:42 compute-0 sudo[111493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fowyozcowpnmrrnxizdtueslbcrgxeal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678262.6739857-227-40454486480201/AnsiballZ_stat.py'
Jan 29 09:17:42 compute-0 sudo[111493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:43 compute-0 python3.9[111495]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:17:43 compute-0 ceph-mon[75183]: pgmap v210: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:43 compute-0 sudo[111493]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:43 compute-0 sudo[111571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qckvgyzluqirpavhxutjgyqjomciudja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678262.6739857-227-40454486480201/AnsiballZ_file.py'
Jan 29 09:17:43 compute-0 sudo[111571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:17:43 compute-0 python3.9[111573]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:17:43 compute-0 sudo[111571]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v211: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:44 compute-0 sudo[111723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmwyplcqvtychimnpyddjxiyjpbbgekm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678263.802913-239-78262861708900/AnsiballZ_stat.py'
Jan 29 09:17:44 compute-0 sudo[111723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:44 compute-0 python3.9[111725]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:17:44 compute-0 sudo[111723]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:44 compute-0 sudo[111801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avgkamkgkzwusmudirhwhuyaqmhaxtzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678263.802913-239-78262861708900/AnsiballZ_file.py'
Jan 29 09:17:44 compute-0 sudo[111801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:44 compute-0 python3.9[111803]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:17:44 compute-0 sudo[111801]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:45 compute-0 ceph-mon[75183]: pgmap v211: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:45 compute-0 sudo[111953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abzbcfqiyilfkangyfuywwdabimbwfdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678265.1663163-257-25889736122757/AnsiballZ_lineinfile.py'
Jan 29 09:17:45 compute-0 sudo[111953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:45 compute-0 python3.9[111955]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:17:45 compute-0 sudo[111953]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v212: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:46 compute-0 ceph-mon[75183]: pgmap v212: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:46 compute-0 sudo[112105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxswqqigzoewrxubtdmfrouffbiiesae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678266.305857-272-115434166628251/AnsiballZ_setup.py'
Jan 29 09:17:46 compute-0 sudo[112105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:46 compute-0 python3.9[112107]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:17:47 compute-0 sudo[112105]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:47 compute-0 sudo[112189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iirrdbnvsqidekonifoypftfcgdnwjqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678266.305857-272-115434166628251/AnsiballZ_systemd.py'
Jan 29 09:17:47 compute-0 sudo[112189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:48 compute-0 python3.9[112191]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:17:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v213: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:48 compute-0 sudo[112189]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:17:48 compute-0 sshd-session[107070]: Connection closed by 192.168.122.30 port 48740
Jan 29 09:17:48 compute-0 sshd-session[107067]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:17:48 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 29 09:17:48 compute-0 systemd[1]: session-37.scope: Consumed 22.779s CPU time.
Jan 29 09:17:48 compute-0 systemd-logind[799]: Session 37 logged out. Waiting for processes to exit.
Jan 29 09:17:48 compute-0 systemd-logind[799]: Removed session 37.
Jan 29 09:17:49 compute-0 ceph-mon[75183]: pgmap v213: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v214: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:51 compute-0 ceph-mon[75183]: pgmap v214: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v215: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:53 compute-0 ceph-mon[75183]: pgmap v215: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:17:53 compute-0 sshd-session[112218]: Accepted publickey for zuul from 192.168.122.30 port 46652 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:17:53 compute-0 systemd-logind[799]: New session 38 of user zuul.
Jan 29 09:17:53 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 29 09:17:53 compute-0 sshd-session[112218]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:17:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v216: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:54 compute-0 sudo[112371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoniejqsddynrnudtylrvyqyrvnvxbtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678273.9846897-17-68346255829667/AnsiballZ_file.py'
Jan 29 09:17:54 compute-0 sudo[112371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:54 compute-0 python3.9[112373]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:17:54 compute-0 sudo[112371]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:55 compute-0 ceph-mon[75183]: pgmap v216: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:55 compute-0 sudo[112523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdsdwbeaxhvytaaopitpzxmcrnchyzct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678274.7677796-29-16603736815839/AnsiballZ_stat.py'
Jan 29 09:17:55 compute-0 sudo[112523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:55 compute-0 python3.9[112525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:17:55 compute-0 sudo[112523]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:55 compute-0 sudo[112601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jflmxzjkhvoqcobaernadsxqafgybbqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678274.7677796-29-16603736815839/AnsiballZ_file.py'
Jan 29 09:17:55 compute-0 sudo[112601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:17:55 compute-0 python3.9[112603]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:17:55 compute-0 sudo[112601]: pam_unix(sudo:session): session closed for user root
Jan 29 09:17:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:17:55
Jan 29 09:17:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:17:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:17:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['images', '.mgr', 'backups', 'vms', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 29 09:17:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v217: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:56 compute-0 sshd-session[112221]: Connection closed by 192.168.122.30 port 46652
Jan 29 09:17:56 compute-0 sshd-session[112218]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:17:56 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 29 09:17:56 compute-0 systemd[1]: session-38.scope: Consumed 1.414s CPU time.
Jan 29 09:17:56 compute-0 systemd-logind[799]: Session 38 logged out. Waiting for processes to exit.
Jan 29 09:17:56 compute-0 systemd-logind[799]: Removed session 38.
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:17:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:17:57 compute-0 ceph-mon[75183]: pgmap v217: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v218: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:58 compute-0 ceph-mon[75183]: pgmap v218: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:17:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:18:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v219: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:01 compute-0 ceph-mon[75183]: pgmap v219: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:18:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:18:01 compute-0 sshd-session[112628]: Accepted publickey for zuul from 192.168.122.30 port 56154 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:18:01 compute-0 systemd-logind[799]: New session 39 of user zuul.
Jan 29 09:18:01 compute-0 systemd[1]: Started Session 39 of User zuul.
Jan 29 09:18:01 compute-0 sshd-session[112628]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:18:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v220: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:02 compute-0 ceph-mon[75183]: pgmap v220: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:02 compute-0 python3.9[112781]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:18:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:18:03 compute-0 sudo[112935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeaeerpjfonyfjkvnfguuarxweruwjlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678283.2014313-28-5884214343733/AnsiballZ_file.py'
Jan 29 09:18:03 compute-0 sudo[112935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:03 compute-0 python3.9[112937]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:03 compute-0 sudo[112935]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v221: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:04 compute-0 sudo[113110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imvymryohqjulxhkgzlyeuvofvquilst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678283.969764-36-185167123379124/AnsiballZ_stat.py'
Jan 29 09:18:04 compute-0 sudo[113110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:04 compute-0 python3.9[113112]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:04 compute-0 sudo[113110]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:05 compute-0 sudo[113188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rorqefiqdtiurcombawxblskfqswdbgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678283.969764-36-185167123379124/AnsiballZ_file.py'
Jan 29 09:18:05 compute-0 sudo[113188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:05 compute-0 ceph-mon[75183]: pgmap v221: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:05 compute-0 python3.9[113190]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.p627z8es recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:05 compute-0 sudo[113188]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:05 compute-0 sudo[113340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caglaukcmbudzjkwdeewitgujabcokwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678285.565812-56-30878289821866/AnsiballZ_stat.py'
Jan 29 09:18:05 compute-0 sudo[113340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:05 compute-0 python3.9[113342]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:06 compute-0 sudo[113340]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v222: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:06 compute-0 sudo[113418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbmslrunxtcmsthmpmnafxkrsvyepkcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678285.565812-56-30878289821866/AnsiballZ_file.py'
Jan 29 09:18:06 compute-0 sudo[113418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:06 compute-0 python3.9[113420]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.s6dl_w3l recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:06 compute-0 sudo[113418]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:06 compute-0 sudo[113570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfthcjvsvgojlhpfyedopviynkiuircb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678286.5939443-69-131343882457323/AnsiballZ_file.py'
Jan 29 09:18:06 compute-0 sudo[113570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:07 compute-0 python3.9[113572]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:18:07 compute-0 sudo[113570]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:07 compute-0 ceph-mon[75183]: pgmap v222: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:07 compute-0 sudo[113722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsgaytmnzqhgurrgbsxvujeqlaoyniaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678287.1890283-77-261955542156796/AnsiballZ_stat.py'
Jan 29 09:18:07 compute-0 sudo[113722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:07 compute-0 python3.9[113724]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:07 compute-0 sudo[113722]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:07 compute-0 sudo[113800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptthilvtfltgtgxqcwhgrmgzlrgaryea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678287.1890283-77-261955542156796/AnsiballZ_file.py'
Jan 29 09:18:07 compute-0 sudo[113800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:08 compute-0 python3.9[113802]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:18:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v223: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:08 compute-0 sudo[113800]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:08 compute-0 sudo[113952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xykfdtnsnkrhudaqjhxkmeefjjtbwqbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678288.1851506-77-13571943926113/AnsiballZ_stat.py'
Jan 29 09:18:08 compute-0 sudo[113952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:18:08 compute-0 python3.9[113954]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:08 compute-0 sudo[113952]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:08 compute-0 sudo[114030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmdrqdrbcbopzlpnumtwblsmvaqemwfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678288.1851506-77-13571943926113/AnsiballZ_file.py'
Jan 29 09:18:08 compute-0 sudo[114030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:08 compute-0 python3.9[114032]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:18:08 compute-0 sudo[114030]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:09 compute-0 ceph-mon[75183]: pgmap v223: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:09 compute-0 sudo[114182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfvsfyrsnklfstkiynnrxxfxlznhlcmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678289.1231897-100-6166451717234/AnsiballZ_file.py'
Jan 29 09:18:09 compute-0 sudo[114182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:09 compute-0 python3.9[114184]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:09 compute-0 sudo[114182]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:09 compute-0 sudo[114334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfeqjfwsgqwmnoeiulzpykzrljhjtaoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678289.6970317-108-176213015250636/AnsiballZ_stat.py'
Jan 29 09:18:09 compute-0 sudo[114334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v224: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:10 compute-0 python3.9[114336]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:10 compute-0 sudo[114334]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:10 compute-0 sudo[114412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxillhnoffnyeatyyembgcubsgwsskzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678289.6970317-108-176213015250636/AnsiballZ_file.py'
Jan 29 09:18:10 compute-0 sudo[114412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:10 compute-0 python3.9[114414]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:10 compute-0 sudo[114412]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:10 compute-0 sudo[114564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xusiueswllzxxldcyfhgcjgpagwxxqkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678290.707308-120-234806636335926/AnsiballZ_stat.py'
Jan 29 09:18:10 compute-0 sudo[114564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:11 compute-0 python3.9[114566]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:11 compute-0 sudo[114564]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:11 compute-0 ceph-mon[75183]: pgmap v224: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:11 compute-0 sudo[114642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sthedmapyqantplufbjrbthteqjwwaij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678290.707308-120-234806636335926/AnsiballZ_file.py'
Jan 29 09:18:11 compute-0 sudo[114642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:11 compute-0 python3.9[114644]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:11 compute-0 sudo[114642]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v225: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:12 compute-0 ceph-mon[75183]: pgmap v225: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:12 compute-0 sudo[114794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltfhlrzcbrbayrfnkxwfkgmrfzcldgpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678291.724924-132-153181738933581/AnsiballZ_systemd.py'
Jan 29 09:18:12 compute-0 sudo[114794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:12 compute-0 python3.9[114796]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:18:12 compute-0 systemd[1]: Reloading.
Jan 29 09:18:12 compute-0 systemd-sysv-generator[114828]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:18:12 compute-0 systemd-rc-local-generator[114825]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:18:12 compute-0 sudo[114794]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:13 compute-0 sudo[114984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkrjncuagjklejabxqmyvbksojmjpqnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678293.0594637-140-62488393092206/AnsiballZ_stat.py'
Jan 29 09:18:13 compute-0 sudo[114984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:18:13 compute-0 python3.9[114986]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:13 compute-0 sudo[114984]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:13 compute-0 sudo[115062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awzhjyhdmurcpqiisgoymylzolusyybk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678293.0594637-140-62488393092206/AnsiballZ_file.py'
Jan 29 09:18:13 compute-0 sudo[115062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:13 compute-0 python3.9[115064]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:13 compute-0 sudo[115062]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v226: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:14 compute-0 sudo[115214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjnamygznhmexwfkwjedwbmnuddsgbwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678294.083472-152-96029515623203/AnsiballZ_stat.py'
Jan 29 09:18:14 compute-0 sudo[115214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:14 compute-0 python3.9[115216]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:14 compute-0 sudo[115214]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:14 compute-0 sudo[115292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyxqlaltbybbomcakzrgdrvgylbrfhci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678294.083472-152-96029515623203/AnsiballZ_file.py'
Jan 29 09:18:14 compute-0 sudo[115292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:14 compute-0 python3.9[115294]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:14 compute-0 sudo[115292]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:15 compute-0 ceph-mon[75183]: pgmap v226: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:15 compute-0 sudo[115444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaqnucrpevmwijslbiiagzjpovvmeueu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678295.0658646-164-276695891971511/AnsiballZ_systemd.py'
Jan 29 09:18:15 compute-0 sudo[115444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:15 compute-0 python3.9[115446]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:18:15 compute-0 systemd[1]: Reloading.
Jan 29 09:18:15 compute-0 systemd-rc-local-generator[115471]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:18:15 compute-0 systemd-sysv-generator[115475]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:18:15 compute-0 systemd[1]: Starting Create netns directory...
Jan 29 09:18:15 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 29 09:18:15 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 29 09:18:15 compute-0 systemd[1]: Finished Create netns directory.
Jan 29 09:18:15 compute-0 sudo[115444]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v227: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:16 compute-0 python3.9[115637]: ansible-ansible.builtin.service_facts Invoked
Jan 29 09:18:16 compute-0 network[115654]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 09:18:16 compute-0 network[115655]: 'network-scripts' will be removed from distribution in near future.
Jan 29 09:18:16 compute-0 network[115656]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 09:18:17 compute-0 ceph-mon[75183]: pgmap v227: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v228: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:18:19 compute-0 ceph-mon[75183]: pgmap v228: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:19 compute-0 sudo[115916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpntdlnqwtayltwmumiqhdcqddtxvozf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678299.22548-190-125580560531050/AnsiballZ_stat.py'
Jan 29 09:18:19 compute-0 sudo[115916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:19 compute-0 python3.9[115918]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:19 compute-0 sudo[115916]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:19 compute-0 sudo[115994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avnuxnjtolezpzdtynxnkaybfrzibcws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678299.22548-190-125580560531050/AnsiballZ_file.py'
Jan 29 09:18:19 compute-0 sudo[115994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v229: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:20 compute-0 python3.9[115996]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:20 compute-0 sudo[115994]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:20 compute-0 sudo[116146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeovjsnlzjtquucolnohyfrjocfbxyqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678300.298011-203-43398625903037/AnsiballZ_file.py'
Jan 29 09:18:20 compute-0 sudo[116146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:20 compute-0 python3.9[116148]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:20 compute-0 sudo[116146]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:21 compute-0 ceph-mon[75183]: pgmap v229: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:21 compute-0 sudo[116298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drddafsoltficnqlpgvscvuefkzvkrtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678300.9287794-211-135132632791615/AnsiballZ_stat.py'
Jan 29 09:18:21 compute-0 sudo[116298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:21 compute-0 python3.9[116300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:21 compute-0 sudo[116298]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:21 compute-0 sudo[116376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvaqobngcyvvlnrxkrpcgqzkehtofqqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678300.9287794-211-135132632791615/AnsiballZ_file.py'
Jan 29 09:18:21 compute-0 sudo[116376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:21 compute-0 python3.9[116378]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:21 compute-0 sudo[116376]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v230: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:22 compute-0 sudo[116528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cneccxtibwswfyhtzbdyxdskajxnyaxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678302.0161493-226-2027660926195/AnsiballZ_timezone.py'
Jan 29 09:18:22 compute-0 sudo[116528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:22 compute-0 python3.9[116530]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 29 09:18:22 compute-0 systemd[1]: Starting Time & Date Service...
Jan 29 09:18:22 compute-0 systemd[1]: Started Time & Date Service.
Jan 29 09:18:22 compute-0 sudo[116528]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:23 compute-0 ceph-mon[75183]: pgmap v230: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:23 compute-0 sudo[116684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mltzwqaqddvlfqghgyhvnjfxczkazotl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678302.973205-235-124719609104864/AnsiballZ_file.py'
Jan 29 09:18:23 compute-0 sudo[116684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:18:23 compute-0 python3.9[116686]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:23 compute-0 sudo[116684]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:23 compute-0 sudo[116836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjrkrqucrgbnutfqqiwasvzturreikyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678303.613342-243-153854521637977/AnsiballZ_stat.py'
Jan 29 09:18:23 compute-0 sudo[116836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:24 compute-0 python3.9[116838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v231: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:24 compute-0 sudo[116836]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:24 compute-0 sudo[116914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcchrdlvcvjzekluiymoptujjijtilta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678303.613342-243-153854521637977/AnsiballZ_file.py'
Jan 29 09:18:24 compute-0 sudo[116914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:24 compute-0 python3.9[116916]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:24 compute-0 sudo[116914]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:24 compute-0 sudo[117066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktzxgeybjgvdwgireyfsedtputbtgejo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678304.679327-255-15248063097907/AnsiballZ_stat.py'
Jan 29 09:18:24 compute-0 sudo[117066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:25 compute-0 python3.9[117068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:25 compute-0 sudo[117066]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:25 compute-0 ceph-mon[75183]: pgmap v231: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:25 compute-0 sudo[117144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwcvfekvicgppnjfhikqlgmndxjcbbtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678304.679327-255-15248063097907/AnsiballZ_file.py'
Jan 29 09:18:25 compute-0 sudo[117144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:25 compute-0 python3.9[117146]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.jsru_129 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:25 compute-0 sudo[117144]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:25 compute-0 sudo[117296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egnzpxkfoanczxepquhaooiedmodsaty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678305.6614006-267-238428617107472/AnsiballZ_stat.py'
Jan 29 09:18:25 compute-0 sudo[117296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:26 compute-0 python3.9[117298]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v232: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:26 compute-0 sudo[117296]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:26 compute-0 ceph-mon[75183]: pgmap v232: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:26 compute-0 sudo[117374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzxunkfzukfxlewntssqzvbktmtxlbyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678305.6614006-267-238428617107472/AnsiballZ_file.py'
Jan 29 09:18:26 compute-0 sudo[117374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:26 compute-0 python3.9[117376]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:26 compute-0 sudo[117374]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:18:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:18:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:18:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:18:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:18:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:18:27 compute-0 sudo[117526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mviwysjrgmaqosscggrpdunotuohzprn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678306.7017193-280-122719114132132/AnsiballZ_command.py'
Jan 29 09:18:27 compute-0 sudo[117526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:27 compute-0 python3.9[117528]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:18:27 compute-0 sudo[117526]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:27 compute-0 sudo[117679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krsvshyramynxeyooqlfpqitmmcjrvxi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769678307.4988022-288-252628948132676/AnsiballZ_edpm_nftables_from_files.py'
Jan 29 09:18:27 compute-0 sudo[117679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v233: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:28 compute-0 python3[117681]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 29 09:18:28 compute-0 sudo[117679]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:18:28 compute-0 sudo[117831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipxpzhpwwaxatldnplzhckwgxnerzgij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678308.2527988-296-268171448178826/AnsiballZ_stat.py'
Jan 29 09:18:28 compute-0 sudo[117831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:28 compute-0 python3.9[117833]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:28 compute-0 sudo[117831]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:28 compute-0 sudo[117909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-givccyocenwbcjtukntfisqgrzsgtrtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678308.2527988-296-268171448178826/AnsiballZ_file.py'
Jan 29 09:18:28 compute-0 sudo[117909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:29 compute-0 python3.9[117911]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:29 compute-0 sudo[117909]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:29 compute-0 ceph-mon[75183]: pgmap v233: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:29 compute-0 sudo[118061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tymezdkjqjoddgkwmwmdvdkjhnozceou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678309.3881717-308-65699645018066/AnsiballZ_stat.py'
Jan 29 09:18:29 compute-0 sudo[118061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:29 compute-0 python3.9[118063]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:29 compute-0 sudo[118061]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v234: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:30 compute-0 ceph-mon[75183]: pgmap v234: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:30 compute-0 sudo[118186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxiqzzdainkkzedveevjhbytbngvkary ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678309.3881717-308-65699645018066/AnsiballZ_copy.py'
Jan 29 09:18:30 compute-0 sudo[118186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:30 compute-0 python3.9[118188]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678309.3881717-308-65699645018066/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:30 compute-0 sudo[118186]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:30 compute-0 sudo[118338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuufxldjedurfpamtpoqimkqnsixrucc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678310.6761382-323-148454771253928/AnsiballZ_stat.py'
Jan 29 09:18:30 compute-0 sudo[118338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:31 compute-0 python3.9[118340]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:31 compute-0 sudo[118338]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:31 compute-0 sudo[118416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyntezpfifolbzfqgtllvdcsekjlwvkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678310.6761382-323-148454771253928/AnsiballZ_file.py'
Jan 29 09:18:31 compute-0 sudo[118416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:31 compute-0 python3.9[118418]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:31 compute-0 sudo[118416]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:31 compute-0 sudo[118568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtggsxhqnpzjhnqozxpqrxfxmraezuqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678311.6706283-335-142474019086183/AnsiballZ_stat.py'
Jan 29 09:18:31 compute-0 sudo[118568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v235: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:32 compute-0 python3.9[118570]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:32 compute-0 sudo[118568]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:32 compute-0 sudo[118646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaogpbqdddbunbkdihrmwhhmtzxyhgku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678311.6706283-335-142474019086183/AnsiballZ_file.py'
Jan 29 09:18:32 compute-0 sudo[118646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:32 compute-0 python3.9[118648]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:32 compute-0 sudo[118646]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:33 compute-0 sudo[118798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjyqxbpfffjbuyitmodphfplcsgpjewq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678312.6986432-347-80944445273000/AnsiballZ_stat.py'
Jan 29 09:18:33 compute-0 sudo[118798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:33 compute-0 ceph-mon[75183]: pgmap v235: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:33 compute-0 python3.9[118800]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:33 compute-0 sudo[118798]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:18:33 compute-0 sudo[118876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibmdsbwrlswjitiicpstascurwoxeucu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678312.6986432-347-80944445273000/AnsiballZ_file.py'
Jan 29 09:18:33 compute-0 sudo[118876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:33 compute-0 python3.9[118878]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:33 compute-0 sudo[118876]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:34 compute-0 sudo[119028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asyruermlffaiebhwraxehwlrhamtwpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678313.7942102-360-222899772218015/AnsiballZ_command.py'
Jan 29 09:18:34 compute-0 sudo[119028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v236: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:34 compute-0 python3.9[119030]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:18:34 compute-0 sudo[119028]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:34 compute-0 sudo[119183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwqelauudfnrpbdhmrpkucupepwbeylo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678314.4106083-368-124624950321184/AnsiballZ_blockinfile.py'
Jan 29 09:18:34 compute-0 sudo[119183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:35 compute-0 python3.9[119185]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:35 compute-0 sudo[119183]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:35 compute-0 ceph-mon[75183]: pgmap v236: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:35 compute-0 sudo[119335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iayondlloiufvyjczujifkxhnaunqusl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678315.2517004-377-121831283007978/AnsiballZ_file.py'
Jan 29 09:18:35 compute-0 sudo[119335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:35 compute-0 python3.9[119337]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:35 compute-0 sudo[119335]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v237: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:36 compute-0 sudo[119487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bryqdnyxvhfcmbpvygibvmfnjlnuvaxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678315.857002-377-32775840948602/AnsiballZ_file.py'
Jan 29 09:18:36 compute-0 sudo[119487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:36 compute-0 python3.9[119489]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:36 compute-0 sudo[119487]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:36 compute-0 sudo[119639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amxfwrpzxhpuejhafecqztvsggdwzvcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678316.4521685-392-103642844757507/AnsiballZ_mount.py'
Jan 29 09:18:36 compute-0 sudo[119639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:37 compute-0 python3.9[119641]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 29 09:18:37 compute-0 sudo[119639]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:37 compute-0 ceph-mon[75183]: pgmap v237: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:37 compute-0 sudo[119791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdizsxjjgdocxchbpnekyywaeoaoscth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678317.2742782-392-224572661001629/AnsiballZ_mount.py'
Jan 29 09:18:37 compute-0 sudo[119791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:37 compute-0 python3.9[119793]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 29 09:18:37 compute-0 sudo[119791]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v238: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:38 compute-0 sshd-session[112631]: Connection closed by 192.168.122.30 port 56154
Jan 29 09:18:38 compute-0 sshd-session[112628]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:18:38 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Jan 29 09:18:38 compute-0 systemd[1]: session-39.scope: Consumed 26.252s CPU time.
Jan 29 09:18:38 compute-0 systemd-logind[799]: Session 39 logged out. Waiting for processes to exit.
Jan 29 09:18:38 compute-0 systemd-logind[799]: Removed session 39.
Jan 29 09:18:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:18:38 compute-0 sudo[119818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:18:38 compute-0 sudo[119818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:18:38 compute-0 sudo[119818]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:38 compute-0 sudo[119843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:18:38 compute-0 sudo[119843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:18:39 compute-0 ceph-mon[75183]: pgmap v238: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:39 compute-0 sudo[119843]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:18:39 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:18:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:18:39 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:18:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:18:39 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:18:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:18:39 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:18:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:18:39 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:18:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:18:39 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:18:39 compute-0 sudo[119900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:18:39 compute-0 sudo[119900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:18:39 compute-0 sudo[119900]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:39 compute-0 sudo[119925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:18:39 compute-0 sudo[119925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:18:39 compute-0 podman[119963]: 2026-01-29 09:18:39.719871473 +0000 UTC m=+0.044345942 container create 4cb7f0a785cc0505c0ca5fad9746e6565519a9a7695daa348a24c33c2b0465c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:18:39 compute-0 systemd[1]: Started libpod-conmon-4cb7f0a785cc0505c0ca5fad9746e6565519a9a7695daa348a24c33c2b0465c8.scope.
Jan 29 09:18:39 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:18:39 compute-0 podman[119963]: 2026-01-29 09:18:39.697306321 +0000 UTC m=+0.021780600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:18:39 compute-0 podman[119963]: 2026-01-29 09:18:39.798763375 +0000 UTC m=+0.123237654 container init 4cb7f0a785cc0505c0ca5fad9746e6565519a9a7695daa348a24c33c2b0465c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:18:39 compute-0 podman[119963]: 2026-01-29 09:18:39.804792666 +0000 UTC m=+0.129266925 container start 4cb7f0a785cc0505c0ca5fad9746e6565519a9a7695daa348a24c33c2b0465c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:18:39 compute-0 podman[119963]: 2026-01-29 09:18:39.807925855 +0000 UTC m=+0.132400224 container attach 4cb7f0a785cc0505c0ca5fad9746e6565519a9a7695daa348a24c33c2b0465c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 09:18:39 compute-0 pedantic_wing[119979]: 167 167
Jan 29 09:18:39 compute-0 systemd[1]: libpod-4cb7f0a785cc0505c0ca5fad9746e6565519a9a7695daa348a24c33c2b0465c8.scope: Deactivated successfully.
Jan 29 09:18:39 compute-0 conmon[119979]: conmon 4cb7f0a785cc0505c0ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4cb7f0a785cc0505c0ca5fad9746e6565519a9a7695daa348a24c33c2b0465c8.scope/container/memory.events
Jan 29 09:18:39 compute-0 podman[119963]: 2026-01-29 09:18:39.81093045 +0000 UTC m=+0.135404709 container died 4cb7f0a785cc0505c0ca5fad9746e6565519a9a7695daa348a24c33c2b0465c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:18:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba764d29d2cd00484f6ecfca963dbfa25ecac18c27ab4f71cfa24abcfb7317ff-merged.mount: Deactivated successfully.
Jan 29 09:18:39 compute-0 podman[119963]: 2026-01-29 09:18:39.847612383 +0000 UTC m=+0.172086642 container remove 4cb7f0a785cc0505c0ca5fad9746e6565519a9a7695daa348a24c33c2b0465c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:18:39 compute-0 systemd[1]: libpod-conmon-4cb7f0a785cc0505c0ca5fad9746e6565519a9a7695daa348a24c33c2b0465c8.scope: Deactivated successfully.
Jan 29 09:18:39 compute-0 podman[120002]: 2026-01-29 09:18:39.973335966 +0000 UTC m=+0.045914226 container create f351eae51a8a961f05d2c7517d5eca0127916d6271bce6e8742b059dddba7321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:18:40 compute-0 systemd[1]: Started libpod-conmon-f351eae51a8a961f05d2c7517d5eca0127916d6271bce6e8742b059dddba7321.scope.
Jan 29 09:18:40 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f242f988743bd6879e661a45a812e7d34ccae11ccab19d9be3ea6e7e6daa0517/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f242f988743bd6879e661a45a812e7d34ccae11ccab19d9be3ea6e7e6daa0517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f242f988743bd6879e661a45a812e7d34ccae11ccab19d9be3ea6e7e6daa0517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f242f988743bd6879e661a45a812e7d34ccae11ccab19d9be3ea6e7e6daa0517/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f242f988743bd6879e661a45a812e7d34ccae11ccab19d9be3ea6e7e6daa0517/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:18:40 compute-0 podman[120002]: 2026-01-29 09:18:39.952075822 +0000 UTC m=+0.024654172 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:18:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v239: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:40 compute-0 podman[120002]: 2026-01-29 09:18:40.117638177 +0000 UTC m=+0.190216467 container init f351eae51a8a961f05d2c7517d5eca0127916d6271bce6e8742b059dddba7321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:18:40 compute-0 podman[120002]: 2026-01-29 09:18:40.12302876 +0000 UTC m=+0.195607020 container start f351eae51a8a961f05d2c7517d5eca0127916d6271bce6e8742b059dddba7321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 09:18:40 compute-0 podman[120002]: 2026-01-29 09:18:40.146016524 +0000 UTC m=+0.218594814 container attach f351eae51a8a961f05d2c7517d5eca0127916d6271bce6e8742b059dddba7321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 09:18:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:18:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:18:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:18:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:18:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:18:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:18:40 compute-0 reverent_hopper[120018]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:18:40 compute-0 reverent_hopper[120018]: --> All data devices are unavailable
Jan 29 09:18:40 compute-0 systemd[1]: libpod-f351eae51a8a961f05d2c7517d5eca0127916d6271bce6e8742b059dddba7321.scope: Deactivated successfully.
Jan 29 09:18:40 compute-0 podman[120002]: 2026-01-29 09:18:40.547350359 +0000 UTC m=+0.619928619 container died f351eae51a8a961f05d2c7517d5eca0127916d6271bce6e8742b059dddba7321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 09:18:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f242f988743bd6879e661a45a812e7d34ccae11ccab19d9be3ea6e7e6daa0517-merged.mount: Deactivated successfully.
Jan 29 09:18:40 compute-0 podman[120002]: 2026-01-29 09:18:40.776080139 +0000 UTC m=+0.848658409 container remove f351eae51a8a961f05d2c7517d5eca0127916d6271bce6e8742b059dddba7321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:18:40 compute-0 systemd[1]: libpod-conmon-f351eae51a8a961f05d2c7517d5eca0127916d6271bce6e8742b059dddba7321.scope: Deactivated successfully.
Jan 29 09:18:40 compute-0 sudo[119925]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:40 compute-0 sudo[120052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:18:40 compute-0 sudo[120052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:18:40 compute-0 sudo[120052]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:40 compute-0 sudo[120077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:18:40 compute-0 sudo[120077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:18:41 compute-0 ceph-mon[75183]: pgmap v239: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:41 compute-0 podman[120114]: 2026-01-29 09:18:41.307227284 +0000 UTC m=+0.070751152 container create 235a82977621c5be029816cb211a8ce80de5616822b0229dbfaa8670aa745549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 09:18:41 compute-0 podman[120114]: 2026-01-29 09:18:41.260279429 +0000 UTC m=+0.023803327 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:18:41 compute-0 systemd[1]: Started libpod-conmon-235a82977621c5be029816cb211a8ce80de5616822b0229dbfaa8670aa745549.scope.
Jan 29 09:18:41 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:18:41 compute-0 podman[120114]: 2026-01-29 09:18:41.416640013 +0000 UTC m=+0.180163901 container init 235a82977621c5be029816cb211a8ce80de5616822b0229dbfaa8670aa745549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_antonelli, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:18:41 compute-0 podman[120114]: 2026-01-29 09:18:41.424015193 +0000 UTC m=+0.187539091 container start 235a82977621c5be029816cb211a8ce80de5616822b0229dbfaa8670aa745549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_antonelli, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 29 09:18:41 compute-0 blissful_antonelli[120131]: 167 167
Jan 29 09:18:41 compute-0 systemd[1]: libpod-235a82977621c5be029816cb211a8ce80de5616822b0229dbfaa8670aa745549.scope: Deactivated successfully.
Jan 29 09:18:41 compute-0 podman[120114]: 2026-01-29 09:18:41.449152767 +0000 UTC m=+0.212676635 container attach 235a82977621c5be029816cb211a8ce80de5616822b0229dbfaa8670aa745549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:18:41 compute-0 podman[120114]: 2026-01-29 09:18:41.451330999 +0000 UTC m=+0.214854957 container died 235a82977621c5be029816cb211a8ce80de5616822b0229dbfaa8670aa745549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_antonelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-27f1bedd0e20b98142abd6e87a85a350d18249ece6563aba8878c34ab02d017a-merged.mount: Deactivated successfully.
Jan 29 09:18:41 compute-0 podman[120114]: 2026-01-29 09:18:41.736984067 +0000 UTC m=+0.500507925 container remove 235a82977621c5be029816cb211a8ce80de5616822b0229dbfaa8670aa745549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 09:18:41 compute-0 systemd[1]: libpod-conmon-235a82977621c5be029816cb211a8ce80de5616822b0229dbfaa8670aa745549.scope: Deactivated successfully.
Jan 29 09:18:41 compute-0 podman[120155]: 2026-01-29 09:18:41.851236594 +0000 UTC m=+0.023411896 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:18:41 compute-0 podman[120155]: 2026-01-29 09:18:41.979494199 +0000 UTC m=+0.151669491 container create 59f289ab941287c6c514a51d1def2a8346dd7ac7d5fda85c3ae7fd4df5fe1155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:18:42 compute-0 systemd[1]: Started libpod-conmon-59f289ab941287c6c514a51d1def2a8346dd7ac7d5fda85c3ae7fd4df5fe1155.scope.
Jan 29 09:18:42 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f551fefa0d6ad07293a1919aa0f9817036400c36d1529cebbe1fc1de8a0da893/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f551fefa0d6ad07293a1919aa0f9817036400c36d1529cebbe1fc1de8a0da893/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f551fefa0d6ad07293a1919aa0f9817036400c36d1529cebbe1fc1de8a0da893/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f551fefa0d6ad07293a1919aa0f9817036400c36d1529cebbe1fc1de8a0da893/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:18:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v240: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:42 compute-0 podman[120155]: 2026-01-29 09:18:42.185992968 +0000 UTC m=+0.358168270 container init 59f289ab941287c6c514a51d1def2a8346dd7ac7d5fda85c3ae7fd4df5fe1155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:18:42 compute-0 podman[120155]: 2026-01-29 09:18:42.192510093 +0000 UTC m=+0.364685375 container start 59f289ab941287c6c514a51d1def2a8346dd7ac7d5fda85c3ae7fd4df5fe1155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:18:42 compute-0 podman[120155]: 2026-01-29 09:18:42.211367719 +0000 UTC m=+0.383543031 container attach 59f289ab941287c6c514a51d1def2a8346dd7ac7d5fda85c3ae7fd4df5fe1155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bell, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 09:18:42 compute-0 ceph-mon[75183]: pgmap v240: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:42 compute-0 happy_bell[120172]: {
Jan 29 09:18:42 compute-0 happy_bell[120172]:     "0": [
Jan 29 09:18:42 compute-0 happy_bell[120172]:         {
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "devices": [
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "/dev/loop3"
Jan 29 09:18:42 compute-0 happy_bell[120172]:             ],
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_name": "ceph_lv0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_size": "21470642176",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "name": "ceph_lv0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "tags": {
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.cluster_name": "ceph",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.crush_device_class": "",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.encrypted": "0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.objectstore": "bluestore",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.osd_id": "0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.type": "block",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.vdo": "0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.with_tpm": "0"
Jan 29 09:18:42 compute-0 happy_bell[120172]:             },
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "type": "block",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "vg_name": "ceph_vg0"
Jan 29 09:18:42 compute-0 happy_bell[120172]:         }
Jan 29 09:18:42 compute-0 happy_bell[120172]:     ],
Jan 29 09:18:42 compute-0 happy_bell[120172]:     "1": [
Jan 29 09:18:42 compute-0 happy_bell[120172]:         {
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "devices": [
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "/dev/loop4"
Jan 29 09:18:42 compute-0 happy_bell[120172]:             ],
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_name": "ceph_lv1",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_size": "21470642176",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "name": "ceph_lv1",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "tags": {
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.cluster_name": "ceph",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.crush_device_class": "",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.encrypted": "0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.objectstore": "bluestore",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.osd_id": "1",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.type": "block",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.vdo": "0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.with_tpm": "0"
Jan 29 09:18:42 compute-0 happy_bell[120172]:             },
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "type": "block",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "vg_name": "ceph_vg1"
Jan 29 09:18:42 compute-0 happy_bell[120172]:         }
Jan 29 09:18:42 compute-0 happy_bell[120172]:     ],
Jan 29 09:18:42 compute-0 happy_bell[120172]:     "2": [
Jan 29 09:18:42 compute-0 happy_bell[120172]:         {
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "devices": [
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "/dev/loop5"
Jan 29 09:18:42 compute-0 happy_bell[120172]:             ],
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_name": "ceph_lv2",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_size": "21470642176",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "name": "ceph_lv2",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "tags": {
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.cluster_name": "ceph",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.crush_device_class": "",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.encrypted": "0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.objectstore": "bluestore",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.osd_id": "2",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.type": "block",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.vdo": "0",
Jan 29 09:18:42 compute-0 happy_bell[120172]:                 "ceph.with_tpm": "0"
Jan 29 09:18:42 compute-0 happy_bell[120172]:             },
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "type": "block",
Jan 29 09:18:42 compute-0 happy_bell[120172]:             "vg_name": "ceph_vg2"
Jan 29 09:18:42 compute-0 happy_bell[120172]:         }
Jan 29 09:18:42 compute-0 happy_bell[120172]:     ]
Jan 29 09:18:42 compute-0 happy_bell[120172]: }
Jan 29 09:18:42 compute-0 systemd[1]: libpod-59f289ab941287c6c514a51d1def2a8346dd7ac7d5fda85c3ae7fd4df5fe1155.scope: Deactivated successfully.
Jan 29 09:18:42 compute-0 podman[120155]: 2026-01-29 09:18:42.48868719 +0000 UTC m=+0.660862472 container died 59f289ab941287c6c514a51d1def2a8346dd7ac7d5fda85c3ae7fd4df5fe1155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 09:18:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f551fefa0d6ad07293a1919aa0f9817036400c36d1529cebbe1fc1de8a0da893-merged.mount: Deactivated successfully.
Jan 29 09:18:42 compute-0 podman[120155]: 2026-01-29 09:18:42.547681676 +0000 UTC m=+0.719856958 container remove 59f289ab941287c6c514a51d1def2a8346dd7ac7d5fda85c3ae7fd4df5fe1155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 09:18:42 compute-0 systemd[1]: libpod-conmon-59f289ab941287c6c514a51d1def2a8346dd7ac7d5fda85c3ae7fd4df5fe1155.scope: Deactivated successfully.
Jan 29 09:18:42 compute-0 sudo[120077]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:42 compute-0 sudo[120195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:18:42 compute-0 sudo[120195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:18:42 compute-0 sudo[120195]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:42 compute-0 sudo[120220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:18:42 compute-0 sudo[120220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:18:42 compute-0 podman[120258]: 2026-01-29 09:18:42.980263129 +0000 UTC m=+0.038224526 container create 8b3e49ee8c56eb59906e8f7c43505845d880e4fee8109ea614da1bf26c5c297d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 09:18:43 compute-0 systemd[1]: Started libpod-conmon-8b3e49ee8c56eb59906e8f7c43505845d880e4fee8109ea614da1bf26c5c297d.scope.
Jan 29 09:18:43 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:18:43 compute-0 podman[120258]: 2026-01-29 09:18:43.06087838 +0000 UTC m=+0.118839797 container init 8b3e49ee8c56eb59906e8f7c43505845d880e4fee8109ea614da1bf26c5c297d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 09:18:43 compute-0 podman[120258]: 2026-01-29 09:18:42.965583702 +0000 UTC m=+0.023545119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:18:43 compute-0 podman[120258]: 2026-01-29 09:18:43.068105475 +0000 UTC m=+0.126066872 container start 8b3e49ee8c56eb59906e8f7c43505845d880e4fee8109ea614da1bf26c5c297d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 09:18:43 compute-0 xenodochial_brown[120274]: 167 167
Jan 29 09:18:43 compute-0 systemd[1]: libpod-8b3e49ee8c56eb59906e8f7c43505845d880e4fee8109ea614da1bf26c5c297d.scope: Deactivated successfully.
Jan 29 09:18:43 compute-0 podman[120258]: 2026-01-29 09:18:43.073008555 +0000 UTC m=+0.130969962 container attach 8b3e49ee8c56eb59906e8f7c43505845d880e4fee8109ea614da1bf26c5c297d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_brown, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 09:18:43 compute-0 podman[120258]: 2026-01-29 09:18:43.074699073 +0000 UTC m=+0.132660480 container died 8b3e49ee8c56eb59906e8f7c43505845d880e4fee8109ea614da1bf26c5c297d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_brown, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cc1bcfbe23349e09637d9bbd64a09393bdd66e334a0d3f6612f65de430f864d-merged.mount: Deactivated successfully.
Jan 29 09:18:43 compute-0 podman[120258]: 2026-01-29 09:18:43.127579776 +0000 UTC m=+0.185541173 container remove 8b3e49ee8c56eb59906e8f7c43505845d880e4fee8109ea614da1bf26c5c297d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 09:18:43 compute-0 systemd[1]: libpod-conmon-8b3e49ee8c56eb59906e8f7c43505845d880e4fee8109ea614da1bf26c5c297d.scope: Deactivated successfully.
Jan 29 09:18:43 compute-0 sshd-session[120290]: Accepted publickey for zuul from 192.168.122.30 port 37600 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:18:43 compute-0 systemd-logind[799]: New session 40 of user zuul.
Jan 29 09:18:43 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 29 09:18:43 compute-0 sshd-session[120290]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:18:43 compute-0 podman[120299]: 2026-01-29 09:18:43.243008876 +0000 UTC m=+0.040890253 container create c3b71efeb0b80fc0721bf8566612a000f25785ee222586fe8b1327cba0a7bbba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_saha, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:18:43 compute-0 systemd[1]: Started libpod-conmon-c3b71efeb0b80fc0721bf8566612a000f25785ee222586fe8b1327cba0a7bbba.scope.
Jan 29 09:18:43 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/227fed643e799c6b33a51918ce6f2c0072a6bc9729a48b89e33b7f9fda0df242/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/227fed643e799c6b33a51918ce6f2c0072a6bc9729a48b89e33b7f9fda0df242/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/227fed643e799c6b33a51918ce6f2c0072a6bc9729a48b89e33b7f9fda0df242/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/227fed643e799c6b33a51918ce6f2c0072a6bc9729a48b89e33b7f9fda0df242/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:18:43 compute-0 podman[120299]: 2026-01-29 09:18:43.22767416 +0000 UTC m=+0.025555557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:18:43 compute-0 podman[120299]: 2026-01-29 09:18:43.328695941 +0000 UTC m=+0.126577328 container init c3b71efeb0b80fc0721bf8566612a000f25785ee222586fe8b1327cba0a7bbba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 09:18:43 compute-0 podman[120299]: 2026-01-29 09:18:43.335498604 +0000 UTC m=+0.133379981 container start c3b71efeb0b80fc0721bf8566612a000f25785ee222586fe8b1327cba0a7bbba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_saha, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:18:43 compute-0 podman[120299]: 2026-01-29 09:18:43.340007843 +0000 UTC m=+0.137889230 container attach c3b71efeb0b80fc0721bf8566612a000f25785ee222586fe8b1327cba0a7bbba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:18:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:18:43 compute-0 sudo[120518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqhhjrgwqhspzjntwqxzyjwszrbwrxgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678323.3291185-16-260941487404997/AnsiballZ_tempfile.py'
Jan 29 09:18:43 compute-0 sudo[120518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:43 compute-0 python3.9[120525]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 29 09:18:43 compute-0 lvm[120546]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:18:43 compute-0 lvm[120546]: VG ceph_vg0 finished
Jan 29 09:18:44 compute-0 sudo[120518]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:44 compute-0 lvm[120549]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:18:44 compute-0 lvm[120549]: VG ceph_vg1 finished
Jan 29 09:18:44 compute-0 lvm[120554]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:18:44 compute-0 lvm[120554]: VG ceph_vg2 finished
Jan 29 09:18:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v241: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:44 compute-0 determined_saha[120315]: {}
Jan 29 09:18:44 compute-0 systemd[1]: libpod-c3b71efeb0b80fc0721bf8566612a000f25785ee222586fe8b1327cba0a7bbba.scope: Deactivated successfully.
Jan 29 09:18:44 compute-0 systemd[1]: libpod-c3b71efeb0b80fc0721bf8566612a000f25785ee222586fe8b1327cba0a7bbba.scope: Consumed 1.199s CPU time.
Jan 29 09:18:44 compute-0 podman[120299]: 2026-01-29 09:18:44.162856477 +0000 UTC m=+0.960737864 container died c3b71efeb0b80fc0721bf8566612a000f25785ee222586fe8b1327cba0a7bbba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_saha, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:18:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-227fed643e799c6b33a51918ce6f2c0072a6bc9729a48b89e33b7f9fda0df242-merged.mount: Deactivated successfully.
Jan 29 09:18:44 compute-0 podman[120299]: 2026-01-29 09:18:44.203929274 +0000 UTC m=+1.001810651 container remove c3b71efeb0b80fc0721bf8566612a000f25785ee222586fe8b1327cba0a7bbba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_saha, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:18:44 compute-0 systemd[1]: libpod-conmon-c3b71efeb0b80fc0721bf8566612a000f25785ee222586fe8b1327cba0a7bbba.scope: Deactivated successfully.
Jan 29 09:18:44 compute-0 sudo[120220]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:18:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:18:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:18:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:18:44 compute-0 sudo[120640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:18:44 compute-0 sudo[120640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:18:44 compute-0 sudo[120640]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:44 compute-0 sudo[120738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsmfrkfljockvajklskstsytmwgdbyyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678324.1388628-28-62644885509688/AnsiballZ_stat.py'
Jan 29 09:18:44 compute-0 sudo[120738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:44 compute-0 python3.9[120740]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:18:44 compute-0 sudo[120738]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:45 compute-0 ceph-mon[75183]: pgmap v241: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:18:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:18:45 compute-0 sudo[120892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilocylunozmtdmbpimhevbzhfhlopdyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678324.9138346-36-205634663247151/AnsiballZ_slurp.py'
Jan 29 09:18:45 compute-0 sudo[120892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:45 compute-0 python3.9[120894]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 29 09:18:45 compute-0 sudo[120892]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:45 compute-0 sudo[121044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvpwwazarxzpxqzocrrqpvejaetmbpgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678325.6152124-44-219271785296779/AnsiballZ_stat.py'
Jan 29 09:18:45 compute-0 sudo[121044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:46 compute-0 python3.9[121046]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.58fmhrqm follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:18:46 compute-0 sudo[121044]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v242: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:46 compute-0 sudo[121169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uchulwpicsikyuhubdteaghbyndxufxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678325.6152124-44-219271785296779/AnsiballZ_copy.py'
Jan 29 09:18:46 compute-0 sudo[121169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:46 compute-0 python3.9[121171]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.58fmhrqm mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678325.6152124-44-219271785296779/.source.58fmhrqm _original_basename=.s_17ougt follow=False checksum=1db26c8cc09e555d1cc1fbc1029a0f705fa51c2f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:46 compute-0 sudo[121169]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:47 compute-0 ceph-mon[75183]: pgmap v242: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:47 compute-0 sudo[121321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmbdhtovpdiwkcyglzftngrhvsovyiwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678326.7979062-59-73114164935667/AnsiballZ_setup.py'
Jan 29 09:18:47 compute-0 sudo[121321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:47 compute-0 python3.9[121323]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:18:47 compute-0 sudo[121321]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v243: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:48 compute-0 sudo[121473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptessskifkazwnuppdjbsgzjixpzjzbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678327.8291306-68-81067233100478/AnsiballZ_blockinfile.py'
Jan 29 09:18:48 compute-0 sudo[121473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:48 compute-0 python3.9[121475]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjuxZIJ4X8jT6trr5CartPX+xSyv6a7KuzJBzvtqnlScCyccSTh2hIF/m/mxwqVM6xI10XvoL6FyLNojFtf+FnVMhM9rRwoM/m2Gk/dDGgxWGxqndd7e54BNHzwcErzCLYORDFNcMVFLfvlJjglTHabcYqcQ7D34yPyBImv2JZkIXcPKxlA0dSe92bqOt8Srjqd7eTDHrvD8Ucs09i0t3TrSIg2fzwxWs38gnD8rHvgibq1nm1pYFZFAVVpUWDbxqB1GogN1jls44gwyptQbvRRzW/8qslugFinSADjrdhgV9BN9TCkO/Fiae7Kw1ME3xFCYrgHDyEdHjo4SFt32SAMDeg5XBOP+2FoXB3YV3RUa8ctzxaAobE1LPb1hPsluGuQ180BCJYiou6hXDOw6VwSjI59Xd9PPV6voHtOV3hijs0tMHTigimaqnacysTk9yWeU4ZVosAQT2FMWZv6shG6zbGZewCLD7jGfDrdzdyxBquJ7GN/N5t+KjtqFI3Rr8=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINQy3RXTmDzw+Eiotj8TUZIiot4Z9D7DKW79i5sp1sRr
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCahJ3hcjzChG5NXhUimUXwcSbVxmfQH1zORvmedrE8Hzpp1mYh+ZP4/SqeWvSb00XQFfZNxpUdcWKLt9leH/n8=
                                              create=True mode=0644 path=/tmp/ansible.58fmhrqm state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:18:48 compute-0 sudo[121473]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:48 compute-0 sudo[121625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnagfsfwymojpsbrabpoquvajyanjhny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678328.5422523-76-196462689589856/AnsiballZ_command.py'
Jan 29 09:18:48 compute-0 sudo[121625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:49 compute-0 python3.9[121627]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.58fmhrqm' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:18:49 compute-0 sudo[121625]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:49 compute-0 ceph-mon[75183]: pgmap v243: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:49 compute-0 sudo[121779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjgjmtpjhoysiqaivrvuprjplgafexgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678329.2667272-84-212682956621862/AnsiballZ_file.py'
Jan 29 09:18:49 compute-0 sudo[121779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:49 compute-0 python3.9[121781]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.58fmhrqm state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:18:49 compute-0 sudo[121779]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v244: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:50 compute-0 sshd-session[120312]: Connection closed by 192.168.122.30 port 37600
Jan 29 09:18:50 compute-0 sshd-session[120290]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:18:50 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 29 09:18:50 compute-0 systemd[1]: session-40.scope: Consumed 4.244s CPU time.
Jan 29 09:18:50 compute-0 systemd-logind[799]: Session 40 logged out. Waiting for processes to exit.
Jan 29 09:18:50 compute-0 systemd-logind[799]: Removed session 40.
Jan 29 09:18:51 compute-0 ceph-mon[75183]: pgmap v244: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v245: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:52 compute-0 ceph-mon[75183]: pgmap v245: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:52 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 29 09:18:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:18:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v246: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:55 compute-0 ceph-mon[75183]: pgmap v246: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:55 compute-0 sshd-session[121809]: Accepted publickey for zuul from 192.168.122.30 port 32924 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:18:55 compute-0 systemd-logind[799]: New session 41 of user zuul.
Jan 29 09:18:55 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 29 09:18:55 compute-0 sshd-session[121809]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:18:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:18:55
Jan 29 09:18:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:18:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:18:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'vms', 'images', 'cephfs.cephfs.meta', '.mgr', 'volumes']
Jan 29 09:18:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v247: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:56 compute-0 python3.9[121962]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:18:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:18:57 compute-0 sudo[122116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzhbckxwqbrmyvesysnbeglagefoucri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678336.5004694-27-39796777841503/AnsiballZ_systemd.py'
Jan 29 09:18:57 compute-0 sudo[122116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:57 compute-0 ceph-mon[75183]: pgmap v247: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:57 compute-0 python3.9[122118]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 29 09:18:57 compute-0 sudo[122116]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:57 compute-0 sudo[122270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-offtfwsjoxzqxsofsuqlbdmvbktulcni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678337.6071498-35-43443583618832/AnsiballZ_systemd.py'
Jan 29 09:18:57 compute-0 sudo[122270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v248: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:58 compute-0 python3.9[122272]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:18:58 compute-0 sudo[122270]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:18:58 compute-0 sudo[122423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndkebtkakmhmlfztjosvxyldonuytqko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678338.3392873-44-177742961030879/AnsiballZ_command.py'
Jan 29 09:18:58 compute-0 sudo[122423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:58 compute-0 python3.9[122425]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:18:59 compute-0 sudo[122423]: pam_unix(sudo:session): session closed for user root
Jan 29 09:18:59 compute-0 ceph-mon[75183]: pgmap v248: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:18:59 compute-0 sudo[122576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybhznqrxrghkzpjqfiyyadyvunjrvukf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678339.1422966-52-239590109422069/AnsiballZ_stat.py'
Jan 29 09:18:59 compute-0 sudo[122576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:18:59 compute-0 python3.9[122578]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:18:59 compute-0 sudo[122576]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v249: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:00 compute-0 sudo[122728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slejcaiufbpbqwgtsrcglxonbvffcyda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678339.885867-61-249576910467728/AnsiballZ_file.py'
Jan 29 09:19:00 compute-0 sudo[122728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:00 compute-0 python3.9[122730]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:00 compute-0 sudo[122728]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:00 compute-0 sshd-session[121812]: Connection closed by 192.168.122.30 port 32924
Jan 29 09:19:00 compute-0 sshd-session[121809]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:19:00 compute-0 systemd-logind[799]: Session 41 logged out. Waiting for processes to exit.
Jan 29 09:19:00 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 29 09:19:00 compute-0 systemd[1]: session-41.scope: Consumed 3.419s CPU time.
Jan 29 09:19:00 compute-0 systemd-logind[799]: Removed session 41.
Jan 29 09:19:01 compute-0 ceph-mon[75183]: pgmap v249: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:19:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:19:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v250: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:02 compute-0 ceph-mon[75183]: pgmap v250: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:19:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v251: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:05 compute-0 ceph-mon[75183]: pgmap v251: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v252: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:06 compute-0 ceph-mon[75183]: pgmap v252: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:06 compute-0 sshd-session[122755]: Accepted publickey for zuul from 192.168.122.30 port 57700 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:19:06 compute-0 systemd-logind[799]: New session 42 of user zuul.
Jan 29 09:19:06 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 29 09:19:06 compute-0 sshd-session[122755]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:19:07 compute-0 python3.9[122908]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:19:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v253: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:08 compute-0 sudo[123062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfrclyjlhrgqwjoaidqilsnkkvjnjwje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678347.9485214-29-182313464304579/AnsiballZ_setup.py'
Jan 29 09:19:08 compute-0 sudo[123062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:19:08 compute-0 python3.9[123064]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:19:08 compute-0 sudo[123062]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:09 compute-0 sudo[123146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwsqstrxkuvuqfyqapfcbujuyilejmin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678347.9485214-29-182313464304579/AnsiballZ_dnf.py'
Jan 29 09:19:09 compute-0 sudo[123146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:09 compute-0 ceph-mon[75183]: pgmap v253: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:09 compute-0 python3.9[123148]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 29 09:19:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v254: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:10 compute-0 ceph-mon[75183]: pgmap v254: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:10 compute-0 sudo[123146]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:11 compute-0 python3.9[123299]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:19:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v255: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:12 compute-0 python3.9[123450]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 29 09:19:13 compute-0 ceph-mon[75183]: pgmap v255: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:13 compute-0 python3.9[123600]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:19:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:19:13 compute-0 python3.9[123750]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:19:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v256: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:14 compute-0 sshd-session[122758]: Connection closed by 192.168.122.30 port 57700
Jan 29 09:19:14 compute-0 sshd-session[122755]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:19:14 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 29 09:19:14 compute-0 systemd[1]: session-42.scope: Consumed 5.587s CPU time.
Jan 29 09:19:14 compute-0 systemd-logind[799]: Session 42 logged out. Waiting for processes to exit.
Jan 29 09:19:14 compute-0 systemd-logind[799]: Removed session 42.
Jan 29 09:19:15 compute-0 ceph-mon[75183]: pgmap v256: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v257: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:17 compute-0 ceph-mon[75183]: pgmap v257: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v258: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:19:19 compute-0 ceph-mon[75183]: pgmap v258: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:19 compute-0 sshd-session[123776]: Accepted publickey for zuul from 192.168.122.30 port 47408 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:19:19 compute-0 systemd-logind[799]: New session 43 of user zuul.
Jan 29 09:19:19 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 29 09:19:19 compute-0 sshd-session[123776]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:19:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v259: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:20 compute-0 ceph-mon[75183]: pgmap v259: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:20 compute-0 python3.9[123929]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:19:21 compute-0 sudo[124083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stqdvltdhinpioayotxuzdgjaexjkksl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678361.4654958-45-127165573555034/AnsiballZ_file.py'
Jan 29 09:19:21 compute-0 sudo[124083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:21 compute-0 sshd-session[71363]: Received disconnect from 38.129.56.236 port 33270:11: disconnected by user
Jan 29 09:19:21 compute-0 sshd-session[71363]: Disconnected from user zuul 38.129.56.236 port 33270
Jan 29 09:19:21 compute-0 sshd-session[71360]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:19:21 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 09:19:21 compute-0 systemd[1]: session-17.scope: Consumed 1min 36.686s CPU time.
Jan 29 09:19:21 compute-0 systemd-logind[799]: Session 17 logged out. Waiting for processes to exit.
Jan 29 09:19:21 compute-0 systemd-logind[799]: Removed session 17.
Jan 29 09:19:22 compute-0 python3.9[124085]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:19:22 compute-0 sudo[124083]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v260: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:22 compute-0 sudo[124235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucwusjdvcclepdysftzsnlvwwwfckhtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678362.2103155-45-149407013895739/AnsiballZ_file.py'
Jan 29 09:19:22 compute-0 sudo[124235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:22 compute-0 python3.9[124237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:19:22 compute-0 sudo[124235]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:23 compute-0 ceph-mon[75183]: pgmap v260: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:23 compute-0 sudo[124387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohmvtfffgfhexajtkkzlzuxqbsjnbwcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678362.8346744-60-180096522046044/AnsiballZ_stat.py'
Jan 29 09:19:23 compute-0 sudo[124387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:23 compute-0 python3.9[124389]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:19:23 compute-0 sudo[124387]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:23 compute-0 sudo[124510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iysyiwvoacfbbahlmrcigyuzeufhfret ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678362.8346744-60-180096522046044/AnsiballZ_copy.py'
Jan 29 09:19:23 compute-0 sudo[124510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v261: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:24 compute-0 python3.9[124512]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678362.8346744-60-180096522046044/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=9e027d209f96b73516e9e075bf3b04f4e1678c4f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:24 compute-0 sudo[124510]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:24 compute-0 sudo[124662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urcrgvjwxbnjhbpnghgipgaefpywnlge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678364.3299336-60-199381108996225/AnsiballZ_stat.py'
Jan 29 09:19:24 compute-0 sudo[124662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:24 compute-0 python3.9[124664]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:24 compute-0 sudo[124662]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:25 compute-0 sudo[124785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjhvsxkwongomuknbmdwepddzwzfajdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678364.3299336-60-199381108996225/AnsiballZ_copy.py'
Jan 29 09:19:25 compute-0 sudo[124785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:25 compute-0 ceph-mon[75183]: pgmap v261: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:25 compute-0 python3.9[124787]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678364.3299336-60-199381108996225/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2d1628cd0fc674e4fd280bb1deef4cd79f25bde5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:25 compute-0 sudo[124785]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:25 compute-0 sudo[124937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihgttfshhzghjbvvpzdhqjihzcrjcnpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678365.5781367-60-103025866846614/AnsiballZ_stat.py'
Jan 29 09:19:25 compute-0 sudo[124937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:26 compute-0 python3.9[124939]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:26 compute-0 sudo[124937]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v262: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:26 compute-0 sudo[125060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-regvvdovgyaxcocolziwjaptbofvbhcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678365.5781367-60-103025866846614/AnsiballZ_copy.py'
Jan 29 09:19:26 compute-0 sudo[125060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:19:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:19:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:19:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:19:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:19:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:19:26 compute-0 python3.9[125062]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678365.5781367-60-103025866846614/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1cb6b66a6570b6db92ae838259c9e57b78177c89 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:26 compute-0 sudo[125060]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:27 compute-0 sudo[125212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqlurocnuipdrlyvwaoctdnczdwkhvsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678366.7700288-104-174472718813945/AnsiballZ_file.py'
Jan 29 09:19:27 compute-0 sudo[125212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:27 compute-0 ceph-mon[75183]: pgmap v262: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:27 compute-0 python3.9[125214]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:19:27 compute-0 sudo[125212]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:27 compute-0 sudo[125364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vemdzxqgpuhwavjwpthjxxuhjkqqfeum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678367.3954837-104-128186113419255/AnsiballZ_file.py'
Jan 29 09:19:27 compute-0 sudo[125364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:27 compute-0 python3.9[125366]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:19:27 compute-0 sudo[125364]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v263: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:28 compute-0 ceph-mon[75183]: pgmap v263: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:28 compute-0 sudo[125516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrdfpvexjizsuqfmdnlkmuryuxvnlyej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678368.065121-119-173136820605669/AnsiballZ_stat.py'
Jan 29 09:19:28 compute-0 sudo[125516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:19:28 compute-0 python3.9[125518]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:28 compute-0 sudo[125516]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:28 compute-0 sudo[125639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcehtjxgltvgtnnwmgrlpubcebizmmzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678368.065121-119-173136820605669/AnsiballZ_copy.py'
Jan 29 09:19:28 compute-0 sudo[125639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:29 compute-0 python3.9[125641]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678368.065121-119-173136820605669/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=ff1baa5557e1b3f5adedabf3c474ee2a182417b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:29 compute-0 sudo[125639]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:29 compute-0 sudo[125791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfsxpxzidsinjbmvomsxvgznkhpgeeni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678369.2110314-119-76489422555629/AnsiballZ_stat.py'
Jan 29 09:19:29 compute-0 sudo[125791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:29 compute-0 python3.9[125793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:29 compute-0 sudo[125791]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:30 compute-0 sudo[125914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvvnyzyeurkzuttiuzwdesbmrzfwfewx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678369.2110314-119-76489422555629/AnsiballZ_copy.py'
Jan 29 09:19:30 compute-0 sudo[125914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v264: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:30 compute-0 python3.9[125916]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678369.2110314-119-76489422555629/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b53e97113f159c25260833db9b7a13f209847f83 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:30 compute-0 sudo[125914]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:30 compute-0 ceph-mon[75183]: pgmap v264: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:30 compute-0 sudo[126066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arlwjiiirvkfjogvwihgmyjatgbidlyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678370.3454957-119-243542257284694/AnsiballZ_stat.py'
Jan 29 09:19:30 compute-0 sudo[126066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:30 compute-0 python3.9[126068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:30 compute-0 sudo[126066]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:31 compute-0 sudo[126189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsucduqarteiujoxstluwbvqffaklnrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678370.3454957-119-243542257284694/AnsiballZ_copy.py'
Jan 29 09:19:31 compute-0 sudo[126189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:31 compute-0 python3.9[126191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678370.3454957-119-243542257284694/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=cb27430013af39376f7d084c57a988069da99872 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:31 compute-0 sudo[126189]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:31 compute-0 sudo[126341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aufsgjnjsopgdqhqswoikhhwlsymukwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678371.488321-163-202741675337220/AnsiballZ_file.py'
Jan 29 09:19:31 compute-0 sudo[126341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:31 compute-0 python3.9[126343]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:19:31 compute-0 sudo[126341]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v265: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:32 compute-0 sudo[126493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbdbmgrshgjglsvdtylqpkacxebwyagy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678372.0916815-163-201973582918823/AnsiballZ_file.py'
Jan 29 09:19:32 compute-0 sudo[126493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:32 compute-0 ceph-mon[75183]: pgmap v265: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:32 compute-0 python3.9[126495]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:19:32 compute-0 sudo[126493]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:33 compute-0 sudo[126645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwmfcwdzbajohqvrvpvrpxknfsencovb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678372.8707724-178-242634487683858/AnsiballZ_stat.py'
Jan 29 09:19:33 compute-0 sudo[126645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:33 compute-0 python3.9[126647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:33 compute-0 sudo[126645]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:19:33 compute-0 sudo[126768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkrmisekoqihmkjfgbrbgntpwtjkgiph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678372.8707724-178-242634487683858/AnsiballZ_copy.py'
Jan 29 09:19:33 compute-0 sudo[126768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:33 compute-0 python3.9[126770]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678372.8707724-178-242634487683858/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6622ce4d21646fe079da5879af04d692ae77e3d5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:33 compute-0 sudo[126768]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v266: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:34 compute-0 sudo[126920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvaqqoregendcvdfadbomrrnldoeaacg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678373.981867-178-60850025253275/AnsiballZ_stat.py'
Jan 29 09:19:34 compute-0 sudo[126920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:34 compute-0 python3.9[126922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:34 compute-0 sudo[126920]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:34 compute-0 sudo[127043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcpfmkmlsaymjhzqnbkbaivhmizuzrtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678373.981867-178-60850025253275/AnsiballZ_copy.py'
Jan 29 09:19:34 compute-0 sudo[127043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:35 compute-0 python3.9[127045]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678373.981867-178-60850025253275/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b53e97113f159c25260833db9b7a13f209847f83 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:35 compute-0 sudo[127043]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:35 compute-0 ceph-mon[75183]: pgmap v266: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:35 compute-0 sudo[127195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twonuolijtfmvaqpjyckvicbgrhaaplg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678375.166667-178-222018451566719/AnsiballZ_stat.py'
Jan 29 09:19:35 compute-0 sudo[127195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:35 compute-0 python3.9[127197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:35 compute-0 sudo[127195]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:35 compute-0 sudo[127318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqsabmptotcfsxbcmezabvibtpcmqtpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678375.166667-178-222018451566719/AnsiballZ_copy.py'
Jan 29 09:19:35 compute-0 sudo[127318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v267: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:36 compute-0 python3.9[127320]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678375.166667-178-222018451566719/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6452188b2b11317d7b97c44e650ab417798d237b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:36 compute-0 sudo[127318]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:36 compute-0 ceph-mon[75183]: pgmap v267: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:37 compute-0 sudo[127470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-signcebuilnvywseeftysixkzgziyogl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678376.804862-238-137764650779611/AnsiballZ_file.py'
Jan 29 09:19:37 compute-0 sudo[127470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:37 compute-0 python3.9[127472]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:19:37 compute-0 sudo[127470]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:37 compute-0 sudo[127622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcgbomyxgmfzmfajmatbygwsbsrhbjey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678377.4638746-246-59440329238752/AnsiballZ_stat.py'
Jan 29 09:19:37 compute-0 sudo[127622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:37 compute-0 python3.9[127624]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:37 compute-0 sudo[127622]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v268: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:38 compute-0 sudo[127745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yevxiftwocwmsmelagsgpawpaquayvjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678377.4638746-246-59440329238752/AnsiballZ_copy.py'
Jan 29 09:19:38 compute-0 sudo[127745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:19:38 compute-0 python3.9[127747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678377.4638746-246-59440329238752/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0a3de624a7429dca03fef8d39cefcc5051a17ce1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:38 compute-0 sudo[127745]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:38 compute-0 sudo[127897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qebvsagjynlebhsyyzfdfoweeoaxwhyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678378.716201-262-266681906625755/AnsiballZ_file.py'
Jan 29 09:19:38 compute-0 sudo[127897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:39 compute-0 ceph-mon[75183]: pgmap v268: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:39 compute-0 python3.9[127899]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:19:39 compute-0 sudo[127897]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:39 compute-0 sudo[128049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwuxcoylqesxbfbdmsiweicwhnkorzmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678379.3729427-270-193641178314561/AnsiballZ_stat.py'
Jan 29 09:19:39 compute-0 sudo[128049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:39 compute-0 python3.9[128051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:39 compute-0 sudo[128049]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v269: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:40 compute-0 sudo[128172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csnumbkupfcqkcxxdepweqllezhouicq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678379.3729427-270-193641178314561/AnsiballZ_copy.py'
Jan 29 09:19:40 compute-0 sudo[128172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:40 compute-0 python3.9[128174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678379.3729427-270-193641178314561/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0a3de624a7429dca03fef8d39cefcc5051a17ce1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:40 compute-0 sudo[128172]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:40 compute-0 sudo[128324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgxjibusubrvttlfejofgevtyqzyxpox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678380.5265963-286-217113968902224/AnsiballZ_file.py'
Jan 29 09:19:40 compute-0 sudo[128324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:41 compute-0 python3.9[128326]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:19:41 compute-0 sudo[128324]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:41 compute-0 ceph-mon[75183]: pgmap v269: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:41 compute-0 sudo[128476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwbhdrkcbavjyesrcebmgcphqhvuilox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678381.1809149-294-11276095192343/AnsiballZ_stat.py'
Jan 29 09:19:41 compute-0 sudo[128476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:41 compute-0 python3.9[128478]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:41 compute-0 sudo[128476]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:41 compute-0 sudo[128599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdhszsjanockodtszswuvrzqmxhkuczp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678381.1809149-294-11276095192343/AnsiballZ_copy.py'
Jan 29 09:19:41 compute-0 sudo[128599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v270: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:42 compute-0 python3.9[128601]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678381.1809149-294-11276095192343/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0a3de624a7429dca03fef8d39cefcc5051a17ce1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:42 compute-0 sudo[128599]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:42 compute-0 sudo[128751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsplrseijagsskoizzdkseeqgstacaml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678382.4570522-310-74351961798040/AnsiballZ_file.py'
Jan 29 09:19:42 compute-0 sudo[128751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:42 compute-0 ceph-mon[75183]: pgmap v270: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:42 compute-0 python3.9[128753]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:19:42 compute-0 sudo[128751]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:43 compute-0 sudo[128903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wemunxzuflcmduojincsvdquboxarqbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678383.0827348-318-140577929232306/AnsiballZ_stat.py'
Jan 29 09:19:43 compute-0 sudo[128903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:19:43 compute-0 python3.9[128905]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:43 compute-0 sudo[128903]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:43 compute-0 sudo[129026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmlzcxiwtcsckmqiiozptebvzwsveumj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678383.0827348-318-140577929232306/AnsiballZ_copy.py'
Jan 29 09:19:43 compute-0 sudo[129026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v271: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:44 compute-0 python3.9[129028]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678383.0827348-318-140577929232306/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0a3de624a7429dca03fef8d39cefcc5051a17ce1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:44 compute-0 sudo[129026]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:44 compute-0 ceph-mon[75183]: pgmap v271: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:44 compute-0 sudo[129056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:19:44 compute-0 sudo[129056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:19:44 compute-0 sudo[129056]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:44 compute-0 sudo[129105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:19:44 compute-0 sudo[129105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:19:44 compute-0 sudo[129228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfztwfafmkbxdownffsttcswzekxvdke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678384.3609078-334-9257826832697/AnsiballZ_file.py'
Jan 29 09:19:44 compute-0 sudo[129228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:44 compute-0 python3.9[129230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:19:44 compute-0 sudo[129228]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:44 compute-0 sudo[129105]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:19:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:19:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:19:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:19:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:19:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:19:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:19:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:19:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:19:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:19:44 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 29 09:19:44 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:19:44.958707) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 09:19:44 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 29 09:19:44 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678384958815, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6573, "num_deletes": 251, "total_data_size": 7238735, "memory_usage": 7378352, "flush_reason": "Manual Compaction"}
Jan 29 09:19:44 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 29 09:19:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:19:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678385000245, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 5430495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 6716, "table_properties": {"data_size": 5407988, "index_size": 14329, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7109, "raw_key_size": 62973, "raw_average_key_size": 22, "raw_value_size": 5354708, "raw_average_value_size": 1890, "num_data_blocks": 644, "num_entries": 2833, "num_filter_entries": 2833, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677898, "oldest_key_time": 1769677898, "file_creation_time": 1769678384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 41607 microseconds, and 13343 cpu microseconds.
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:19:45.000329) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 5430495 bytes OK
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:19:45.000363) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:19:45.003609) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:19:45.003644) EVENT_LOG_v1 {"time_micros": 1769678385003637, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:19:45.003689) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 7210941, prev total WAL file size 7210941, number of live WAL files 2.
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:19:45.005299) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(5303KB) 13(58KB) 8(1944B)]
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678385005403, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 5492399, "oldest_snapshot_seqno": -1}
Jan 29 09:19:45 compute-0 sudo[129287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:19:45 compute-0 sudo[129287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:19:45 compute-0 sudo[129287]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2659 keys, 5445384 bytes, temperature: kUnknown
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678385035656, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 5445384, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5423213, "index_size": 14446, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6661, "raw_key_size": 61413, "raw_average_key_size": 23, "raw_value_size": 5371228, "raw_average_value_size": 2020, "num_data_blocks": 649, "num_entries": 2659, "num_filter_entries": 2659, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677896, "oldest_key_time": 0, "file_creation_time": 1769678385, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:19:45.035976) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 5445384 bytes
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:19:45.037844) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.8 rd, 179.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(5.2, 0.0 +0.0 blob) out(5.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2948, records dropped: 289 output_compression: NoCompression
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:19:45.037870) EVENT_LOG_v1 {"time_micros": 1769678385037856, "job": 4, "event": "compaction_finished", "compaction_time_micros": 30375, "compaction_time_cpu_micros": 11266, "output_level": 6, "num_output_files": 1, "total_output_size": 5445384, "num_input_records": 2948, "num_output_records": 2659, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678385039097, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678385039339, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678385039374, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 29 09:19:45 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:19:45.005195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:19:45 compute-0 sudo[129335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:19:45 compute-0 sudo[129335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:19:45 compute-0 sudo[129462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwwyrwtvycbjltqpqhifxyqpforxzxfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678385.02093-342-2590048000583/AnsiballZ_stat.py'
Jan 29 09:19:45 compute-0 sudo[129462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:19:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:19:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:19:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:19:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:19:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:19:45 compute-0 podman[129477]: 2026-01-29 09:19:45.375051253 +0000 UTC m=+0.045932569 container create b38d330d5bf88c3445b92263de21442d3bec37639689219365ede2e755c6610d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_panini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:19:45 compute-0 systemd[1]: Started libpod-conmon-b38d330d5bf88c3445b92263de21442d3bec37639689219365ede2e755c6610d.scope.
Jan 29 09:19:45 compute-0 podman[129477]: 2026-01-29 09:19:45.353783194 +0000 UTC m=+0.024664530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:19:45 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:19:45 compute-0 podman[129477]: 2026-01-29 09:19:45.472945778 +0000 UTC m=+0.143827104 container init b38d330d5bf88c3445b92263de21442d3bec37639689219365ede2e755c6610d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_panini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:19:45 compute-0 podman[129477]: 2026-01-29 09:19:45.483327695 +0000 UTC m=+0.154209001 container start b38d330d5bf88c3445b92263de21442d3bec37639689219365ede2e755c6610d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:19:45 compute-0 podman[129477]: 2026-01-29 09:19:45.488003213 +0000 UTC m=+0.158884719 container attach b38d330d5bf88c3445b92263de21442d3bec37639689219365ede2e755c6610d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 09:19:45 compute-0 magical_panini[129493]: 167 167
Jan 29 09:19:45 compute-0 systemd[1]: libpod-b38d330d5bf88c3445b92263de21442d3bec37639689219365ede2e755c6610d.scope: Deactivated successfully.
Jan 29 09:19:45 compute-0 podman[129477]: 2026-01-29 09:19:45.493734942 +0000 UTC m=+0.164616268 container died b38d330d5bf88c3445b92263de21442d3bec37639689219365ede2e755c6610d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_panini, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:19:45 compute-0 python3.9[129466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-324ac90aba0a2e32b5bc7a1c036669176713f794a01851f1b74386e9735b6b0e-merged.mount: Deactivated successfully.
Jan 29 09:19:45 compute-0 sudo[129462]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:45 compute-0 podman[129477]: 2026-01-29 09:19:45.604681393 +0000 UTC m=+0.275562699 container remove b38d330d5bf88c3445b92263de21442d3bec37639689219365ede2e755c6610d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_panini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:19:45 compute-0 systemd[1]: libpod-conmon-b38d330d5bf88c3445b92263de21442d3bec37639689219365ede2e755c6610d.scope: Deactivated successfully.
Jan 29 09:19:45 compute-0 podman[129540]: 2026-01-29 09:19:45.71144661 +0000 UTC m=+0.021965051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:19:45 compute-0 podman[129540]: 2026-01-29 09:19:45.816241119 +0000 UTC m=+0.126759540 container create 3cfec743099686db47b72bc84f7961495dd632f2acd245bdba22b60c71779b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sinoussi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:19:45 compute-0 systemd[1]: Started libpod-conmon-3cfec743099686db47b72bc84f7961495dd632f2acd245bdba22b60c71779b0b.scope.
Jan 29 09:19:45 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9693210ef12940f87b39ef769199014473cc80b369634b83740ff3fa4ef70a70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9693210ef12940f87b39ef769199014473cc80b369634b83740ff3fa4ef70a70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9693210ef12940f87b39ef769199014473cc80b369634b83740ff3fa4ef70a70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9693210ef12940f87b39ef769199014473cc80b369634b83740ff3fa4ef70a70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9693210ef12940f87b39ef769199014473cc80b369634b83740ff3fa4ef70a70/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:19:45 compute-0 podman[129540]: 2026-01-29 09:19:45.994097797 +0000 UTC m=+0.304616238 container init 3cfec743099686db47b72bc84f7961495dd632f2acd245bdba22b60c71779b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:19:46 compute-0 podman[129540]: 2026-01-29 09:19:46.002436424 +0000 UTC m=+0.312954845 container start 3cfec743099686db47b72bc84f7961495dd632f2acd245bdba22b60c71779b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sinoussi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 09:19:46 compute-0 sudo[129658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytshgasypxfevmssbtqesprflivxfceu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678385.02093-342-2590048000583/AnsiballZ_copy.py'
Jan 29 09:19:46 compute-0 sudo[129658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:46 compute-0 podman[129540]: 2026-01-29 09:19:46.037896882 +0000 UTC m=+0.348415333 container attach 3cfec743099686db47b72bc84f7961495dd632f2acd245bdba22b60c71779b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sinoussi, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 09:19:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v272: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:46 compute-0 python3.9[129660]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678385.02093-342-2590048000583/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0a3de624a7429dca03fef8d39cefcc5051a17ce1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:46 compute-0 ceph-mon[75183]: pgmap v272: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:46 compute-0 sudo[129658]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:46 compute-0 angry_sinoussi[129603]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:19:46 compute-0 angry_sinoussi[129603]: --> All data devices are unavailable
Jan 29 09:19:46 compute-0 systemd[1]: libpod-3cfec743099686db47b72bc84f7961495dd632f2acd245bdba22b60c71779b0b.scope: Deactivated successfully.
Jan 29 09:19:46 compute-0 podman[129540]: 2026-01-29 09:19:46.498260025 +0000 UTC m=+0.808778466 container died 3cfec743099686db47b72bc84f7961495dd632f2acd245bdba22b60c71779b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sinoussi, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:19:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9693210ef12940f87b39ef769199014473cc80b369634b83740ff3fa4ef70a70-merged.mount: Deactivated successfully.
Jan 29 09:19:46 compute-0 podman[129540]: 2026-01-29 09:19:46.665312034 +0000 UTC m=+0.975830455 container remove 3cfec743099686db47b72bc84f7961495dd632f2acd245bdba22b60c71779b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:19:46 compute-0 systemd[1]: libpod-conmon-3cfec743099686db47b72bc84f7961495dd632f2acd245bdba22b60c71779b0b.scope: Deactivated successfully.
Jan 29 09:19:46 compute-0 sudo[129335]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:46 compute-0 sudo[129861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhopjzsqzithbrtpeuhipikopphzshwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678386.4987168-358-106250879498352/AnsiballZ_file.py'
Jan 29 09:19:46 compute-0 sudo[129861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:46 compute-0 sudo[129820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:19:46 compute-0 sudo[129820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:19:46 compute-0 sudo[129820]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:46 compute-0 sudo[129869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:19:46 compute-0 sudo[129869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:19:46 compute-0 python3.9[129867]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:19:47 compute-0 sudo[129861]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:47 compute-0 podman[129924]: 2026-01-29 09:19:47.101190093 +0000 UTC m=+0.036627545 container create 18decd3b219a657e99e0ba795c8f73ca2590bdf60914a94a1a42fac36106ea8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bose, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:19:47 compute-0 systemd[1]: Started libpod-conmon-18decd3b219a657e99e0ba795c8f73ca2590bdf60914a94a1a42fac36106ea8d.scope.
Jan 29 09:19:47 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:19:47 compute-0 podman[129924]: 2026-01-29 09:19:47.084910001 +0000 UTC m=+0.020347473 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:19:47 compute-0 podman[129924]: 2026-01-29 09:19:47.202795857 +0000 UTC m=+0.138233309 container init 18decd3b219a657e99e0ba795c8f73ca2590bdf60914a94a1a42fac36106ea8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:19:47 compute-0 podman[129924]: 2026-01-29 09:19:47.210810594 +0000 UTC m=+0.146248036 container start 18decd3b219a657e99e0ba795c8f73ca2590bdf60914a94a1a42fac36106ea8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bose, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:19:47 compute-0 priceless_bose[129967]: 167 167
Jan 29 09:19:47 compute-0 systemd[1]: libpod-18decd3b219a657e99e0ba795c8f73ca2590bdf60914a94a1a42fac36106ea8d.scope: Deactivated successfully.
Jan 29 09:19:47 compute-0 podman[129924]: 2026-01-29 09:19:47.231122055 +0000 UTC m=+0.166559507 container attach 18decd3b219a657e99e0ba795c8f73ca2590bdf60914a94a1a42fac36106ea8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 09:19:47 compute-0 podman[129924]: 2026-01-29 09:19:47.23165555 +0000 UTC m=+0.167093012 container died 18decd3b219a657e99e0ba795c8f73ca2590bdf60914a94a1a42fac36106ea8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bose, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:19:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7635f94ee6069c74bdb38035a09dd37891f33b910f97b919f7a9ef00f5369466-merged.mount: Deactivated successfully.
Jan 29 09:19:47 compute-0 podman[129924]: 2026-01-29 09:19:47.291682835 +0000 UTC m=+0.227120287 container remove 18decd3b219a657e99e0ba795c8f73ca2590bdf60914a94a1a42fac36106ea8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bose, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 09:19:47 compute-0 systemd[1]: libpod-conmon-18decd3b219a657e99e0ba795c8f73ca2590bdf60914a94a1a42fac36106ea8d.scope: Deactivated successfully.
Jan 29 09:19:47 compute-0 sudo[130099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buuwtiouqegdcxnbxpyqrfaugqzczncn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678387.150069-366-51387322153653/AnsiballZ_stat.py'
Jan 29 09:19:47 compute-0 sudo[130099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:47 compute-0 podman[130095]: 2026-01-29 09:19:47.461088513 +0000 UTC m=+0.071863135 container create f4e23ae12fc6453595f37be06370a83c1ecc5e8b742ecfc5746b8d518424c7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:19:47 compute-0 systemd[1]: Started libpod-conmon-f4e23ae12fc6453595f37be06370a83c1ecc5e8b742ecfc5746b8d518424c7cd.scope.
Jan 29 09:19:47 compute-0 podman[130095]: 2026-01-29 09:19:47.420643608 +0000 UTC m=+0.031418260 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:19:47 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d601d03f9aaf33f48f21945f27f90b22edda9d4cfe96572ad5206c1622271ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d601d03f9aaf33f48f21945f27f90b22edda9d4cfe96572ad5206c1622271ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d601d03f9aaf33f48f21945f27f90b22edda9d4cfe96572ad5206c1622271ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d601d03f9aaf33f48f21945f27f90b22edda9d4cfe96572ad5206c1622271ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:19:47 compute-0 podman[130095]: 2026-01-29 09:19:47.565322935 +0000 UTC m=+0.176097577 container init f4e23ae12fc6453595f37be06370a83c1ecc5e8b742ecfc5746b8d518424c7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:19:47 compute-0 podman[130095]: 2026-01-29 09:19:47.573805256 +0000 UTC m=+0.184579878 container start f4e23ae12fc6453595f37be06370a83c1ecc5e8b742ecfc5746b8d518424c7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_goodall, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 09:19:47 compute-0 podman[130095]: 2026-01-29 09:19:47.585267445 +0000 UTC m=+0.196042087 container attach f4e23ae12fc6453595f37be06370a83c1ecc5e8b742ecfc5746b8d518424c7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 09:19:47 compute-0 python3.9[130110]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:47 compute-0 sudo[130099]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:47 compute-0 nifty_goodall[130118]: {
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:     "0": [
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:         {
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "devices": [
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "/dev/loop3"
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             ],
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_name": "ceph_lv0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_size": "21470642176",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "name": "ceph_lv0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "tags": {
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.cluster_name": "ceph",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.crush_device_class": "",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.encrypted": "0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.objectstore": "bluestore",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.osd_id": "0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.type": "block",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.vdo": "0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.with_tpm": "0"
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             },
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "type": "block",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "vg_name": "ceph_vg0"
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:         }
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:     ],
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:     "1": [
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:         {
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "devices": [
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "/dev/loop4"
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             ],
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_name": "ceph_lv1",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_size": "21470642176",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "name": "ceph_lv1",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "tags": {
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.cluster_name": "ceph",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.crush_device_class": "",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.encrypted": "0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.objectstore": "bluestore",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.osd_id": "1",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.type": "block",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.vdo": "0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.with_tpm": "0"
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             },
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "type": "block",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "vg_name": "ceph_vg1"
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:         }
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:     ],
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:     "2": [
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:         {
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "devices": [
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "/dev/loop5"
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             ],
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_name": "ceph_lv2",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_size": "21470642176",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "name": "ceph_lv2",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "tags": {
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.cluster_name": "ceph",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.crush_device_class": "",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.encrypted": "0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.objectstore": "bluestore",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.osd_id": "2",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.type": "block",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.vdo": "0",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:                 "ceph.with_tpm": "0"
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             },
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "type": "block",
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:             "vg_name": "ceph_vg2"
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:         }
Jan 29 09:19:47 compute-0 nifty_goodall[130118]:     ]
Jan 29 09:19:47 compute-0 nifty_goodall[130118]: }
Jan 29 09:19:47 compute-0 systemd[1]: libpod-f4e23ae12fc6453595f37be06370a83c1ecc5e8b742ecfc5746b8d518424c7cd.scope: Deactivated successfully.
Jan 29 09:19:47 compute-0 podman[130095]: 2026-01-29 09:19:47.904825634 +0000 UTC m=+0.515600266 container died f4e23ae12fc6453595f37be06370a83c1ecc5e8b742ecfc5746b8d518424c7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:19:47 compute-0 sudo[130260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reedpjsagywhbesyvpfrjgtndfhwistu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678387.150069-366-51387322153653/AnsiballZ_copy.py'
Jan 29 09:19:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d601d03f9aaf33f48f21945f27f90b22edda9d4cfe96572ad5206c1622271ac-merged.mount: Deactivated successfully.
Jan 29 09:19:47 compute-0 sudo[130260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:48 compute-0 podman[130095]: 2026-01-29 09:19:48.035920221 +0000 UTC m=+0.646694843 container remove f4e23ae12fc6453595f37be06370a83c1ecc5e8b742ecfc5746b8d518424c7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:19:48 compute-0 systemd[1]: libpod-conmon-f4e23ae12fc6453595f37be06370a83c1ecc5e8b742ecfc5746b8d518424c7cd.scope: Deactivated successfully.
Jan 29 09:19:48 compute-0 sudo[129869]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v273: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:48 compute-0 sudo[130263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:19:48 compute-0 sudo[130263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:19:48 compute-0 sudo[130263]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:48 compute-0 python3.9[130262]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678387.150069-366-51387322153653/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0a3de624a7429dca03fef8d39cefcc5051a17ce1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:48 compute-0 sudo[130288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:19:48 compute-0 sudo[130288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:19:48 compute-0 sudo[130260]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:48 compute-0 ceph-mon[75183]: pgmap v273: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:19:48 compute-0 podman[130348]: 2026-01-29 09:19:48.460362561 +0000 UTC m=+0.022438225 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:19:48 compute-0 sshd-session[123779]: Connection closed by 192.168.122.30 port 47408
Jan 29 09:19:48 compute-0 sshd-session[123776]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:19:48 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 29 09:19:48 compute-0 systemd[1]: session-43.scope: Consumed 21.987s CPU time.
Jan 29 09:19:48 compute-0 systemd-logind[799]: Session 43 logged out. Waiting for processes to exit.
Jan 29 09:19:48 compute-0 systemd-logind[799]: Removed session 43.
Jan 29 09:19:48 compute-0 podman[130348]: 2026-01-29 09:19:48.619747944 +0000 UTC m=+0.181823588 container create d216f765e490422245cb89dd974277ed8106a2a961e19ba5b7176d7913938a14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chebyshev, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 09:19:48 compute-0 systemd[1]: Started libpod-conmon-d216f765e490422245cb89dd974277ed8106a2a961e19ba5b7176d7913938a14.scope.
Jan 29 09:19:48 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:19:48 compute-0 podman[130348]: 2026-01-29 09:19:48.879794923 +0000 UTC m=+0.441870577 container init d216f765e490422245cb89dd974277ed8106a2a961e19ba5b7176d7913938a14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:19:48 compute-0 podman[130348]: 2026-01-29 09:19:48.887912673 +0000 UTC m=+0.449988317 container start d216f765e490422245cb89dd974277ed8106a2a961e19ba5b7176d7913938a14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:19:48 compute-0 unruffled_chebyshev[130364]: 167 167
Jan 29 09:19:48 compute-0 systemd[1]: libpod-d216f765e490422245cb89dd974277ed8106a2a961e19ba5b7176d7913938a14.scope: Deactivated successfully.
Jan 29 09:19:48 compute-0 podman[130348]: 2026-01-29 09:19:48.945238138 +0000 UTC m=+0.507313802 container attach d216f765e490422245cb89dd974277ed8106a2a961e19ba5b7176d7913938a14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 09:19:48 compute-0 podman[130348]: 2026-01-29 09:19:48.94630777 +0000 UTC m=+0.508383434 container died d216f765e490422245cb89dd974277ed8106a2a961e19ba5b7176d7913938a14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chebyshev, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 09:19:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-12016345f3e97a51e7aef2ea4b3a0a27a927c60a60f609755a1662f0ae7767c6-merged.mount: Deactivated successfully.
Jan 29 09:19:49 compute-0 podman[130348]: 2026-01-29 09:19:49.171780057 +0000 UTC m=+0.733855701 container remove d216f765e490422245cb89dd974277ed8106a2a961e19ba5b7176d7913938a14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chebyshev, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:19:49 compute-0 systemd[1]: libpod-conmon-d216f765e490422245cb89dd974277ed8106a2a961e19ba5b7176d7913938a14.scope: Deactivated successfully.
Jan 29 09:19:49 compute-0 podman[130388]: 2026-01-29 09:19:49.281285694 +0000 UTC m=+0.022989430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:19:49 compute-0 podman[130388]: 2026-01-29 09:19:49.381620131 +0000 UTC m=+0.123323837 container create 00238b2130d3b889ea07281f3f91ed017c6dab828d889dd19c87b320f6f6a9c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_haibt, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:19:49 compute-0 systemd[1]: Started libpod-conmon-00238b2130d3b889ea07281f3f91ed017c6dab828d889dd19c87b320f6f6a9c2.scope.
Jan 29 09:19:49 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3089fc80bc5a27d8ee0278a437b36a90a55c50557af646e142be34f269f38ac4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3089fc80bc5a27d8ee0278a437b36a90a55c50557af646e142be34f269f38ac4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3089fc80bc5a27d8ee0278a437b36a90a55c50557af646e142be34f269f38ac4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3089fc80bc5a27d8ee0278a437b36a90a55c50557af646e142be34f269f38ac4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:19:49 compute-0 podman[130388]: 2026-01-29 09:19:49.492319464 +0000 UTC m=+0.234023200 container init 00238b2130d3b889ea07281f3f91ed017c6dab828d889dd19c87b320f6f6a9c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_haibt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 09:19:49 compute-0 podman[130388]: 2026-01-29 09:19:49.497377404 +0000 UTC m=+0.239081110 container start 00238b2130d3b889ea07281f3f91ed017c6dab828d889dd19c87b320f6f6a9c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_haibt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 09:19:49 compute-0 podman[130388]: 2026-01-29 09:19:49.504210256 +0000 UTC m=+0.245914152 container attach 00238b2130d3b889ea07281f3f91ed017c6dab828d889dd19c87b320f6f6a9c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_haibt, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 09:19:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v274: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:50 compute-0 lvm[130481]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:19:50 compute-0 lvm[130481]: VG ceph_vg0 finished
Jan 29 09:19:50 compute-0 lvm[130484]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:19:50 compute-0 lvm[130484]: VG ceph_vg1 finished
Jan 29 09:19:50 compute-0 lvm[130486]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:19:50 compute-0 lvm[130486]: VG ceph_vg2 finished
Jan 29 09:19:50 compute-0 crazy_haibt[130405]: {}
Jan 29 09:19:50 compute-0 systemd[1]: libpod-00238b2130d3b889ea07281f3f91ed017c6dab828d889dd19c87b320f6f6a9c2.scope: Deactivated successfully.
Jan 29 09:19:50 compute-0 systemd[1]: libpod-00238b2130d3b889ea07281f3f91ed017c6dab828d889dd19c87b320f6f6a9c2.scope: Consumed 1.362s CPU time.
Jan 29 09:19:50 compute-0 podman[130388]: 2026-01-29 09:19:50.43405547 +0000 UTC m=+1.175759196 container died 00238b2130d3b889ea07281f3f91ed017c6dab828d889dd19c87b320f6f6a9c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:19:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3089fc80bc5a27d8ee0278a437b36a90a55c50557af646e142be34f269f38ac4-merged.mount: Deactivated successfully.
Jan 29 09:19:50 compute-0 podman[130388]: 2026-01-29 09:19:50.964332699 +0000 UTC m=+1.706036405 container remove 00238b2130d3b889ea07281f3f91ed017c6dab828d889dd19c87b320f6f6a9c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_haibt, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:19:51 compute-0 systemd[1]: libpod-conmon-00238b2130d3b889ea07281f3f91ed017c6dab828d889dd19c87b320f6f6a9c2.scope: Deactivated successfully.
Jan 29 09:19:51 compute-0 sudo[130288]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:19:51 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:19:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:19:51 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:19:51 compute-0 sudo[130501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:19:51 compute-0 sudo[130501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:19:51 compute-0 sudo[130501]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:51 compute-0 ceph-mon[75183]: pgmap v274: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:51 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:19:51 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:19:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v275: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:52 compute-0 ceph-mon[75183]: pgmap v275: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:19:53 compute-0 sshd-session[130526]: Accepted publickey for zuul from 192.168.122.30 port 53172 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:19:53 compute-0 systemd-logind[799]: New session 44 of user zuul.
Jan 29 09:19:53 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 29 09:19:53 compute-0 sshd-session[130526]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:19:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v276: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:54 compute-0 sudo[130679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhivgfutnbmwkjfiyycrijgvlypaklxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678394.088346-17-41912047122324/AnsiballZ_file.py'
Jan 29 09:19:54 compute-0 sudo[130679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:54 compute-0 python3.9[130681]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:54 compute-0 sudo[130679]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:55 compute-0 ceph-mon[75183]: pgmap v276: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:55 compute-0 sudo[130831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcrjrrztideyfembcqvqtmxiyohtvgnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678394.9770985-29-186499395345727/AnsiballZ_stat.py'
Jan 29 09:19:55 compute-0 sudo[130831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:55 compute-0 python3.9[130833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:55 compute-0 sudo[130831]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:19:55
Jan 29 09:19:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:19:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:19:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['volumes', 'vms', 'images', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'backups']
Jan 29 09:19:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v277: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:56 compute-0 sudo[130954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzjsgpfxaiexghknuqthfrzssjdtbdij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678394.9770985-29-186499395345727/AnsiballZ_copy.py'
Jan 29 09:19:56 compute-0 sudo[130954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:56 compute-0 ceph-mon[75183]: pgmap v277: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:56 compute-0 python3.9[130956]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678394.9770985-29-186499395345727/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=c5b8879246a4c24c186fd747854ce7c5cfee178a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:56 compute-0 sudo[130954]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:19:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:19:56 compute-0 sudo[131106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlrxykgqnmuxpzjtnlgocbenrsjiritl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678396.515347-29-238202803730470/AnsiballZ_stat.py'
Jan 29 09:19:56 compute-0 sudo[131106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:56 compute-0 python3.9[131108]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:19:56 compute-0 sudo[131106]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:57 compute-0 sudo[131229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyrqehnnzvflxntjfdlbsckyasnrrbbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678396.515347-29-238202803730470/AnsiballZ_copy.py'
Jan 29 09:19:57 compute-0 sudo[131229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:19:57 compute-0 python3.9[131231]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678396.515347-29-238202803730470/.source.conf _original_basename=ceph.conf follow=False checksum=3071c038228108cf8d609669040b306964adb70f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:19:57 compute-0 sudo[131229]: pam_unix(sudo:session): session closed for user root
Jan 29 09:19:57 compute-0 sshd-session[130529]: Connection closed by 192.168.122.30 port 53172
Jan 29 09:19:57 compute-0 sshd-session[130526]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:19:57 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 29 09:19:57 compute-0 systemd[1]: session-44.scope: Consumed 2.509s CPU time.
Jan 29 09:19:57 compute-0 systemd-logind[799]: Session 44 logged out. Waiting for processes to exit.
Jan 29 09:19:57 compute-0 systemd-logind[799]: Removed session 44.
Jan 29 09:19:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v278: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:58 compute-0 ceph-mon[75183]: pgmap v278: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:19:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:20:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v279: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:01 compute-0 ceph-mon[75183]: pgmap v279: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:01 compute-0 anacron[30926]: Job `cron.daily' started
Jan 29 09:20:01 compute-0 anacron[30926]: Job `cron.daily' terminated
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:20:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:20:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v280: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:02 compute-0 ceph-mon[75183]: pgmap v280: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:03 compute-0 sshd-session[131258]: Accepted publickey for zuul from 192.168.122.30 port 56924 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:20:03 compute-0 systemd-logind[799]: New session 45 of user zuul.
Jan 29 09:20:03 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 29 09:20:03 compute-0 sshd-session[131258]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:20:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:20:04 compute-0 python3.9[131411]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:20:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v281: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:04 compute-0 sudo[131565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjczbuizvuzzqahwpnsgftyhztgqzwot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678404.403226-29-233781560407402/AnsiballZ_file.py'
Jan 29 09:20:04 compute-0 sudo[131565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:05 compute-0 python3.9[131567]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:20:05 compute-0 sudo[131565]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:05 compute-0 ceph-mon[75183]: pgmap v281: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:05 compute-0 sudo[131717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sknnaaxvblkycdxmmcvjkspgiwuffric ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678405.2089577-29-175193914540735/AnsiballZ_file.py'
Jan 29 09:20:05 compute-0 sudo[131717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:05 compute-0 python3.9[131719]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:20:05 compute-0 sudo[131717]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v282: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:06 compute-0 ceph-mon[75183]: pgmap v282: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:06 compute-0 python3.9[131869]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:20:07 compute-0 sudo[132019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjcliunmigiegjturdjbsbexdrrurlmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678406.7545233-52-22932600260780/AnsiballZ_seboolean.py'
Jan 29 09:20:07 compute-0 sudo[132019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:07 compute-0 python3.9[132021]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 29 09:20:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v283: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:20:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v284: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:10 compute-0 ceph-mon[75183]: pgmap v283: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:10 compute-0 sudo[132019]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:10 compute-0 sudo[132177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhhyuvglnfjltjcwyfwuupoapzjmewkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678410.5710006-62-99582455082386/AnsiballZ_setup.py'
Jan 29 09:20:10 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 29 09:20:10 compute-0 sudo[132177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:11 compute-0 python3.9[132179]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:20:11 compute-0 ceph-mon[75183]: pgmap v284: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:11 compute-0 sudo[132177]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:11 compute-0 sudo[132261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otqjxkhppbwbejahfigcmqxvdpxqhuwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678410.5710006-62-99582455082386/AnsiballZ_dnf.py'
Jan 29 09:20:11 compute-0 sudo[132261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:12 compute-0 python3.9[132263]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:20:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v285: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:12 compute-0 ceph-mon[75183]: pgmap v285: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:13 compute-0 sudo[132261]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:20:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v286: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:14 compute-0 sudo[132414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptjjpszucvcyyqaffcowsxypqwzptsok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678413.6100204-74-107158096892232/AnsiballZ_systemd.py'
Jan 29 09:20:14 compute-0 sudo[132414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:14 compute-0 ceph-mon[75183]: pgmap v286: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:14 compute-0 python3.9[132416]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 09:20:14 compute-0 sudo[132414]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:15 compute-0 sudo[132569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sopeyhyeacavramhcvisiuxjllnjkdcd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769678414.719029-82-87303697529114/AnsiballZ_edpm_nftables_snippet.py'
Jan 29 09:20:15 compute-0 sudo[132569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:15 compute-0 python3[132571]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 29 09:20:15 compute-0 sudo[132569]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:15 compute-0 sudo[132721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkqtygrvcpcmwocephrmfwxwbyrplbjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678415.6009963-91-33709230431618/AnsiballZ_file.py'
Jan 29 09:20:15 compute-0 sudo[132721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:16 compute-0 python3.9[132723]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:16 compute-0 sudo[132721]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v287: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:16 compute-0 sudo[132873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arihjwaxbkzuoqkfgnrvfhgowpajmvab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678416.2055209-99-266707181390400/AnsiballZ_stat.py'
Jan 29 09:20:16 compute-0 sudo[132873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:16 compute-0 python3.9[132875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:16 compute-0 sudo[132873]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:17 compute-0 sudo[132951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onokvgztomxewjihuuljzzbokxjodgvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678416.2055209-99-266707181390400/AnsiballZ_file.py'
Jan 29 09:20:17 compute-0 sudo[132951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:17 compute-0 ceph-mon[75183]: pgmap v287: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:17 compute-0 python3.9[132953]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:17 compute-0 sudo[132951]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:17 compute-0 sudo[133103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmejymsplaxhdbqzfpgkaliwfrbtjxop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678417.4174151-111-170675542094595/AnsiballZ_stat.py'
Jan 29 09:20:17 compute-0 sudo[133103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:17 compute-0 python3.9[133105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:17 compute-0 sudo[133103]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v288: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:18 compute-0 sudo[133181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsomfjtjtebrgnpddcnoiyvfnoghsfju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678417.4174151-111-170675542094595/AnsiballZ_file.py'
Jan 29 09:20:18 compute-0 sudo[133181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:18 compute-0 python3.9[133183]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ejh2lnoi recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:18 compute-0 sudo[133181]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:20:18 compute-0 sudo[133333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlngiwlhuztiplpxewotqyygfuxuggus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678418.5015383-123-140734543736701/AnsiballZ_stat.py'
Jan 29 09:20:18 compute-0 sudo[133333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:18 compute-0 python3.9[133335]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:18 compute-0 sudo[133333]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:19 compute-0 sudo[133411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znibwpgbvzncwpmlulsrvckvlamazbkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678418.5015383-123-140734543736701/AnsiballZ_file.py'
Jan 29 09:20:19 compute-0 sudo[133411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:19 compute-0 ceph-mon[75183]: pgmap v288: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:19 compute-0 python3.9[133413]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:19 compute-0 sudo[133411]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:20 compute-0 sudo[133563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rylxdmkxzhrqxkmzszufxbdbjfytthvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678419.6296518-136-78931807867231/AnsiballZ_command.py'
Jan 29 09:20:20 compute-0 sudo[133563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v289: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:20 compute-0 python3.9[133565]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:20:20 compute-0 sudo[133563]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:20 compute-0 ceph-mon[75183]: pgmap v289: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:20 compute-0 sudo[133716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muouzzksjhnbpxgtsfwoiqybqjabmkst ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769678420.4026115-144-175603896316014/AnsiballZ_edpm_nftables_from_files.py'
Jan 29 09:20:20 compute-0 sudo[133716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:21 compute-0 python3[133718]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 29 09:20:21 compute-0 sudo[133716]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:21 compute-0 sudo[133868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrytapqqtirbgmyryrkqxsphcdzfpeyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678421.1689732-152-137722916752220/AnsiballZ_stat.py'
Jan 29 09:20:21 compute-0 sudo[133868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:21 compute-0 python3.9[133870]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:21 compute-0 sudo[133868]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:22 compute-0 sudo[133993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stqvdrrstizkzfxxutbyehwrnlsggfom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678421.1689732-152-137722916752220/AnsiballZ_copy.py'
Jan 29 09:20:22 compute-0 sudo[133993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v290: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:22 compute-0 python3.9[133995]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678421.1689732-152-137722916752220/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:22 compute-0 sudo[133993]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:22 compute-0 sudo[134145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkjzbkzecpdivwkjaykpgeissvvljylw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678422.4648864-167-23746586471332/AnsiballZ_stat.py'
Jan 29 09:20:22 compute-0 sudo[134145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:22 compute-0 python3.9[134147]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:22 compute-0 sudo[134145]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:23 compute-0 ceph-mon[75183]: pgmap v290: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:23 compute-0 sudo[134270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wezpfmgevfyjklclsuigbadqzgyxofov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678422.4648864-167-23746586471332/AnsiballZ_copy.py'
Jan 29 09:20:23 compute-0 sudo[134270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:23 compute-0 python3.9[134272]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678422.4648864-167-23746586471332/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:23 compute-0 sudo[134270]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:20:23 compute-0 sudo[134422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxopniywsdpgxhfafgkpjywfbrkywawb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678423.5607536-182-93286579828710/AnsiballZ_stat.py'
Jan 29 09:20:23 compute-0 sudo[134422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:23 compute-0 python3.9[134424]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:24 compute-0 sudo[134422]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v291: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:24 compute-0 sudo[134547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eslposiketzbirieivmleikmdsnmksre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678423.5607536-182-93286579828710/AnsiballZ_copy.py'
Jan 29 09:20:24 compute-0 sudo[134547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:24 compute-0 python3.9[134549]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678423.5607536-182-93286579828710/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:24 compute-0 sudo[134547]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:24 compute-0 sudo[134699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hifvlvqhrzuxfjlqrvmgnkohbzpstzwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678424.7211592-197-29429263901362/AnsiballZ_stat.py'
Jan 29 09:20:24 compute-0 sudo[134699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:25 compute-0 python3.9[134701]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:25 compute-0 sudo[134699]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:25 compute-0 ceph-mon[75183]: pgmap v291: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:25 compute-0 sudo[134824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iufygizkfjmkophkhmyshngkwnkjojop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678424.7211592-197-29429263901362/AnsiballZ_copy.py'
Jan 29 09:20:25 compute-0 sudo[134824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:25 compute-0 python3.9[134826]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678424.7211592-197-29429263901362/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:25 compute-0 sudo[134824]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:26 compute-0 sudo[134976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwmttotmfcahrvoaqnwudvctsuxhjwzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678425.7899687-212-243856019107747/AnsiballZ_stat.py'
Jan 29 09:20:26 compute-0 sudo[134976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v292: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:26 compute-0 python3.9[134978]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:26 compute-0 sudo[134976]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:20:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:20:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:20:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:20:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:20:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:20:26 compute-0 sudo[135101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixxierjmhtjjcgzxqxxevlqvtmxcbclb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678425.7899687-212-243856019107747/AnsiballZ_copy.py'
Jan 29 09:20:26 compute-0 sudo[135101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:26 compute-0 python3.9[135103]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678425.7899687-212-243856019107747/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:26 compute-0 sudo[135101]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:27 compute-0 ceph-mon[75183]: pgmap v292: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:27 compute-0 sudo[135253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbgjcamkyrivyhdxrtlrmdqzoomzzqfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678427.0119486-227-85188170164825/AnsiballZ_file.py'
Jan 29 09:20:27 compute-0 sudo[135253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:27 compute-0 python3.9[135255]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:27 compute-0 sudo[135253]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:27 compute-0 sudo[135405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zycwihozeoexwslbhgrysgdystchdspc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678427.5968556-235-142945099408720/AnsiballZ_command.py'
Jan 29 09:20:27 compute-0 sudo[135405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:28 compute-0 python3.9[135407]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:20:28 compute-0 sudo[135405]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v293: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:28 compute-0 ceph-mon[75183]: pgmap v293: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:28 compute-0 sudo[135560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlvoxglkketlcuwsmnvrfedptwfvjcjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678428.1797602-243-74161175984169/AnsiballZ_blockinfile.py'
Jan 29 09:20:28 compute-0 sudo[135560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:20:28 compute-0 python3.9[135562]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:28 compute-0 sudo[135560]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:29 compute-0 sudo[135712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idtqhjpuchpxjnigmqzmtojyrcrdwkkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678428.9735653-252-175787584607240/AnsiballZ_command.py'
Jan 29 09:20:29 compute-0 sudo[135712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:29 compute-0 python3.9[135714]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:20:29 compute-0 sudo[135712]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:29 compute-0 sudo[135865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhrgelpkrxlyckuznykawcpgcxnfmsre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678429.573578-260-110110357746005/AnsiballZ_stat.py'
Jan 29 09:20:29 compute-0 sudo[135865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:30 compute-0 python3.9[135867]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:20:30 compute-0 sudo[135865]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v294: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:30 compute-0 sudo[136019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzagrtoeedpazchtfnlqcpygtxohcrdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678430.2355905-268-9840977854449/AnsiballZ_command.py'
Jan 29 09:20:30 compute-0 sudo[136019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:30 compute-0 python3.9[136021]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:20:30 compute-0 sudo[136019]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:31 compute-0 sudo[136174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckiffzyfpopmejfblknglqqkwrlbzdof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678430.903419-276-265164976027680/AnsiballZ_file.py'
Jan 29 09:20:31 compute-0 sudo[136174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:31 compute-0 ceph-mon[75183]: pgmap v294: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:31 compute-0 python3.9[136176]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:31 compute-0 sudo[136174]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v295: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:32 compute-0 python3.9[136326]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:20:33 compute-0 sudo[136477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srwwgtxncviaxlwnozaobdlwzappwemh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678432.916074-316-77695871834920/AnsiballZ_command.py'
Jan 29 09:20:33 compute-0 sudo[136477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:33 compute-0 ceph-mon[75183]: pgmap v295: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:33 compute-0 python3.9[136479]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:20:33 compute-0 ovs-vsctl[136480]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 29 09:20:33 compute-0 sudo[136477]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:20:33 compute-0 sudo[136630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anbrmiohrmmpepgrgbvssttviyrbgjdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678433.5575073-325-205767004267383/AnsiballZ_command.py'
Jan 29 09:20:33 compute-0 sudo[136630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:34 compute-0 python3.9[136632]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:20:34 compute-0 sudo[136630]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v296: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:34 compute-0 sudo[136785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sapyawpgsomhamlicieodtrijdencfnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678434.3477786-333-109951594811320/AnsiballZ_command.py'
Jan 29 09:20:34 compute-0 sudo[136785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:34 compute-0 python3.9[136787]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:20:34 compute-0 ovs-vsctl[136788]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 29 09:20:34 compute-0 sudo[136785]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:35 compute-0 ceph-mon[75183]: pgmap v296: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:35 compute-0 python3.9[136938]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:20:35 compute-0 sudo[137090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkqjqzppthrfsqsnajbykdjtzjerilay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678435.582662-350-15513102840982/AnsiballZ_file.py'
Jan 29 09:20:35 compute-0 sudo[137090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:36 compute-0 python3.9[137092]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:20:36 compute-0 sudo[137090]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v297: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:36 compute-0 ceph-mon[75183]: pgmap v297: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:36 compute-0 sudo[137242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txgaiydqvegjcrbowoswsxdsbzvxnefp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678436.1972811-358-78428253596874/AnsiballZ_stat.py'
Jan 29 09:20:36 compute-0 sudo[137242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:36 compute-0 python3.9[137244]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:36 compute-0 sudo[137242]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:36 compute-0 sudo[137320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeytxuuuqumnogorqvmzvbcnpqloummu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678436.1972811-358-78428253596874/AnsiballZ_file.py'
Jan 29 09:20:36 compute-0 sudo[137320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:37 compute-0 python3.9[137322]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:20:37 compute-0 sudo[137320]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:37 compute-0 sudo[137472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frmpchlopgetkjpfifnvocgampllsrkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678437.1718042-358-168952263855899/AnsiballZ_stat.py'
Jan 29 09:20:37 compute-0 sudo[137472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:37 compute-0 python3.9[137474]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:37 compute-0 sudo[137472]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:37 compute-0 sudo[137550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzvwwrxgegdjiigqcvziaxpklkqpxnle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678437.1718042-358-168952263855899/AnsiballZ_file.py'
Jan 29 09:20:37 compute-0 sudo[137550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:38 compute-0 python3.9[137552]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:20:38 compute-0 sudo[137550]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v298: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:38 compute-0 sudo[137702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbseiqjfbjvrhtggylnkzwyleenxajav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678438.2065215-381-86339415557487/AnsiballZ_file.py'
Jan 29 09:20:38 compute-0 sudo[137702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:38 compute-0 ceph-mon[75183]: pgmap v298: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:20:38 compute-0 python3.9[137704]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:38 compute-0 sudo[137702]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:39 compute-0 sudo[137854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nofxgynfelskxdfshaaiegkghubxigof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678439.0057054-389-246694304329704/AnsiballZ_stat.py'
Jan 29 09:20:39 compute-0 sudo[137854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:39 compute-0 python3.9[137856]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:39 compute-0 sudo[137854]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:39 compute-0 sudo[137932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhwfbkgpvmfqvmzbpttylwbjchriacws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678439.0057054-389-246694304329704/AnsiballZ_file.py'
Jan 29 09:20:39 compute-0 sudo[137932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:39 compute-0 python3.9[137934]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:39 compute-0 sudo[137932]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v299: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:40 compute-0 sudo[138084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoikxlahyfetlyupswnlnmhqyjmzdmoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678440.0628257-401-24216957335178/AnsiballZ_stat.py'
Jan 29 09:20:40 compute-0 sudo[138084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:40 compute-0 python3.9[138086]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:40 compute-0 sudo[138084]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:40 compute-0 sudo[138162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frntedgruxbdplyrfgddhetovbrvieju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678440.0628257-401-24216957335178/AnsiballZ_file.py'
Jan 29 09:20:40 compute-0 sudo[138162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:40 compute-0 python3.9[138164]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:40 compute-0 sudo[138162]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:41 compute-0 sudo[138314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-befefcytoagrvoyzdqmnejxzljhpqfhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678441.0857315-413-148956903897533/AnsiballZ_systemd.py'
Jan 29 09:20:41 compute-0 sudo[138314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:41 compute-0 ceph-mon[75183]: pgmap v299: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:41 compute-0 python3.9[138316]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:20:41 compute-0 systemd[1]: Reloading.
Jan 29 09:20:41 compute-0 systemd-rc-local-generator[138345]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:20:41 compute-0 systemd-sysv-generator[138348]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:20:41 compute-0 sudo[138314]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v300: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:42 compute-0 sudo[138504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgwczpntdupmhxpxrvzhmlkxevqruvuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678442.1140113-421-115725906791538/AnsiballZ_stat.py'
Jan 29 09:20:42 compute-0 sudo[138504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:42 compute-0 ceph-mon[75183]: pgmap v300: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:42 compute-0 python3.9[138506]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:42 compute-0 sudo[138504]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:42 compute-0 sudo[138582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukojwevwinkkuvwmpsdivhyhykuqgqgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678442.1140113-421-115725906791538/AnsiballZ_file.py'
Jan 29 09:20:42 compute-0 sudo[138582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:42 compute-0 python3.9[138584]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:42 compute-0 sudo[138582]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:43 compute-0 sudo[138734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvuxrsgrdtlruwmkoxpdnzgxurzsxglz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678443.1065533-433-253648861114653/AnsiballZ_stat.py'
Jan 29 09:20:43 compute-0 sudo[138734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:43 compute-0 python3.9[138736]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:20:43 compute-0 sudo[138734]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:43 compute-0 sudo[138812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdwhhpybiaqfdtvlcsjqveibbtvnjijt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678443.1065533-433-253648861114653/AnsiballZ_file.py'
Jan 29 09:20:43 compute-0 sudo[138812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v301: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:44 compute-0 python3.9[138814]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:44 compute-0 sudo[138812]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:44 compute-0 ceph-mon[75183]: pgmap v301: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:44 compute-0 sudo[138964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekajwgcvnjaleliuseznnzqbzujsjxku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678444.2979429-445-102403267257079/AnsiballZ_systemd.py'
Jan 29 09:20:44 compute-0 sudo[138964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:44 compute-0 python3.9[138966]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:20:44 compute-0 systemd[1]: Reloading.
Jan 29 09:20:44 compute-0 systemd-sysv-generator[138994]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:20:44 compute-0 systemd-rc-local-generator[138991]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:20:45 compute-0 systemd[1]: Starting Create netns directory...
Jan 29 09:20:45 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 29 09:20:45 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 29 09:20:45 compute-0 systemd[1]: Finished Create netns directory.
Jan 29 09:20:45 compute-0 sudo[138964]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:45 compute-0 sudo[139156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qabpxkmohdfwcamukxwjypnuutyxaose ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678445.4576194-455-211065503972294/AnsiballZ_file.py'
Jan 29 09:20:45 compute-0 sudo[139156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:45 compute-0 python3.9[139158]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:20:45 compute-0 sudo[139156]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v302: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:46 compute-0 sudo[139308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-engxbdpioktycjngcdqgvxyefvgiqjzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678446.0296865-463-15703841200871/AnsiballZ_stat.py'
Jan 29 09:20:46 compute-0 sudo[139308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:46 compute-0 python3.9[139310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:46 compute-0 sudo[139308]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:46 compute-0 sudo[139431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kotktrxwtpaojfhxjmrhahytiupallsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678446.0296865-463-15703841200871/AnsiballZ_copy.py'
Jan 29 09:20:46 compute-0 sudo[139431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:46 compute-0 python3.9[139433]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678446.0296865-463-15703841200871/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:20:47 compute-0 sudo[139431]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:47 compute-0 ceph-mon[75183]: pgmap v302: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:47 compute-0 sudo[139583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-debicegwpqwjyactenlsmtiofgahvglz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678447.335848-480-242773657331561/AnsiballZ_file.py'
Jan 29 09:20:47 compute-0 sudo[139583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:47 compute-0 python3.9[139585]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:47 compute-0 sudo[139583]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v303: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:48 compute-0 sudo[139735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtlsjuboaznqakmxrmdtdilznakgynqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678447.9394152-488-276182327125583/AnsiballZ_file.py'
Jan 29 09:20:48 compute-0 sudo[139735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:48 compute-0 python3.9[139737]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:20:48 compute-0 sudo[139735]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.767748) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678448767785, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 728, "num_deletes": 252, "total_data_size": 644504, "memory_usage": 658480, "flush_reason": "Manual Compaction"}
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678448779434, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 418852, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6717, "largest_seqno": 7444, "table_properties": {"data_size": 415652, "index_size": 1044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7951, "raw_average_key_size": 19, "raw_value_size": 409039, "raw_average_value_size": 997, "num_data_blocks": 49, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769678385, "oldest_key_time": 1769678385, "file_creation_time": 1769678448, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 11757 microseconds, and 2610 cpu microseconds.
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.779499) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 418852 bytes OK
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.779524) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.781395) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.781474) EVENT_LOG_v1 {"time_micros": 1769678448781463, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.781522) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 640764, prev total WAL file size 640764, number of live WAL files 2.
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.782156) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(409KB)], [20(5317KB)]
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678448782215, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 5864236, "oldest_snapshot_seqno": -1}
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 2580 keys, 4248174 bytes, temperature: kUnknown
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678448815918, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 4248174, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4229592, "index_size": 11049, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6469, "raw_key_size": 60156, "raw_average_key_size": 23, "raw_value_size": 4181937, "raw_average_value_size": 1620, "num_data_blocks": 502, "num_entries": 2580, "num_filter_entries": 2580, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677896, "oldest_key_time": 0, "file_creation_time": 1769678448, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.816120) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 4248174 bytes
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.817312) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.7 rd, 125.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 5.2 +0.0 blob) out(4.1 +0.0 blob), read-write-amplify(24.1) write-amplify(10.1) OK, records in: 3069, records dropped: 489 output_compression: NoCompression
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.817334) EVENT_LOG_v1 {"time_micros": 1769678448817323, "job": 6, "event": "compaction_finished", "compaction_time_micros": 33767, "compaction_time_cpu_micros": 12299, "output_level": 6, "num_output_files": 1, "total_output_size": 4248174, "num_input_records": 3069, "num_output_records": 2580, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678448817457, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678448818015, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.782039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.818065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.818070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.818072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.818074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:20:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:20:48.818076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:20:48 compute-0 sudo[139887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxiqoambgsqjfymogukmnldnhcanpwmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678448.708578-496-158388352180903/AnsiballZ_stat.py'
Jan 29 09:20:48 compute-0 sudo[139887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:49 compute-0 python3.9[139889]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:20:49 compute-0 sudo[139887]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:49 compute-0 ceph-mon[75183]: pgmap v303: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:49 compute-0 sudo[140010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcfwupcnfwilorgnqwxjsmoywkthkmpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678448.708578-496-158388352180903/AnsiballZ_copy.py'
Jan 29 09:20:49 compute-0 sudo[140010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:49 compute-0 python3.9[140012]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678448.708578-496-158388352180903/.source.json _original_basename=.osrd4nae follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:49 compute-0 sudo[140010]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v304: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:50 compute-0 python3.9[140162]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:20:50 compute-0 ceph-mon[75183]: pgmap v304: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:51 compute-0 sudo[140434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:20:51 compute-0 sudo[140434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:20:51 compute-0 sudo[140434]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:51 compute-0 sudo[140459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:20:51 compute-0 sudo[140459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:20:51 compute-0 sudo[140459]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:20:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:20:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:20:51 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:20:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:20:51 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:20:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:20:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:20:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:20:51 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:20:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:20:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:20:51 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:20:51 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:20:51 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:20:51 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:20:51 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:20:51 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:20:51 compute-0 sudo[140591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:20:51 compute-0 sudo[140591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:20:51 compute-0 sudo[140591]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:51 compute-0 sudo[140619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:20:51 compute-0 sudo[140619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:20:51 compute-0 sudo[140714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbcxlrdreelgencdzmfqglhfefijnknz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678451.5272644-536-141646775094470/AnsiballZ_container_config_data.py'
Jan 29 09:20:51 compute-0 sudo[140714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:52 compute-0 podman[140730]: 2026-01-29 09:20:52.074691995 +0000 UTC m=+0.056006260 container create 3b9993ac665c597b9a8d4d527f20585bda23aa22f977d9a09d6728dac53b245d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_torvalds, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 09:20:52 compute-0 systemd[1]: Started libpod-conmon-3b9993ac665c597b9a8d4d527f20585bda23aa22f977d9a09d6728dac53b245d.scope.
Jan 29 09:20:52 compute-0 python3.9[140716]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 29 09:20:52 compute-0 podman[140730]: 2026-01-29 09:20:52.041183726 +0000 UTC m=+0.022498011 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:20:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v305: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:52 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:20:52 compute-0 sudo[140714]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:52 compute-0 podman[140730]: 2026-01-29 09:20:52.173947927 +0000 UTC m=+0.155262222 container init 3b9993ac665c597b9a8d4d527f20585bda23aa22f977d9a09d6728dac53b245d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:20:52 compute-0 podman[140730]: 2026-01-29 09:20:52.185656484 +0000 UTC m=+0.166970749 container start 3b9993ac665c597b9a8d4d527f20585bda23aa22f977d9a09d6728dac53b245d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_torvalds, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:20:52 compute-0 intelligent_torvalds[140746]: 167 167
Jan 29 09:20:52 compute-0 systemd[1]: libpod-3b9993ac665c597b9a8d4d527f20585bda23aa22f977d9a09d6728dac53b245d.scope: Deactivated successfully.
Jan 29 09:20:52 compute-0 podman[140730]: 2026-01-29 09:20:52.191995516 +0000 UTC m=+0.173309791 container attach 3b9993ac665c597b9a8d4d527f20585bda23aa22f977d9a09d6728dac53b245d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_torvalds, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:20:52 compute-0 podman[140730]: 2026-01-29 09:20:52.192825719 +0000 UTC m=+0.174139984 container died 3b9993ac665c597b9a8d4d527f20585bda23aa22f977d9a09d6728dac53b245d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 09:20:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f31aca86d1f6834a33d8fca53b86d6e46073f926246eb50528b55999f9154ecc-merged.mount: Deactivated successfully.
Jan 29 09:20:52 compute-0 podman[140730]: 2026-01-29 09:20:52.239632468 +0000 UTC m=+0.220946733 container remove 3b9993ac665c597b9a8d4d527f20585bda23aa22f977d9a09d6728dac53b245d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_torvalds, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:20:52 compute-0 systemd[1]: libpod-conmon-3b9993ac665c597b9a8d4d527f20585bda23aa22f977d9a09d6728dac53b245d.scope: Deactivated successfully.
Jan 29 09:20:52 compute-0 podman[140794]: 2026-01-29 09:20:52.365288407 +0000 UTC m=+0.041191919 container create 992718ee01da41c2804175a3bbb3868da424160e5b7d43524f3a00d592291d71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shaw, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:20:52 compute-0 systemd[1]: Started libpod-conmon-992718ee01da41c2804175a3bbb3868da424160e5b7d43524f3a00d592291d71.scope.
Jan 29 09:20:52 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:20:52 compute-0 podman[140794]: 2026-01-29 09:20:52.347473134 +0000 UTC m=+0.023376666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7595cc95472c59b9ebdd97af05cd5e96232ce5aff7ae0af91a8d16449a7578a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7595cc95472c59b9ebdd97af05cd5e96232ce5aff7ae0af91a8d16449a7578a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7595cc95472c59b9ebdd97af05cd5e96232ce5aff7ae0af91a8d16449a7578a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7595cc95472c59b9ebdd97af05cd5e96232ce5aff7ae0af91a8d16449a7578a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7595cc95472c59b9ebdd97af05cd5e96232ce5aff7ae0af91a8d16449a7578a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:20:52 compute-0 podman[140794]: 2026-01-29 09:20:52.462590846 +0000 UTC m=+0.138494378 container init 992718ee01da41c2804175a3bbb3868da424160e5b7d43524f3a00d592291d71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:20:52 compute-0 podman[140794]: 2026-01-29 09:20:52.469667038 +0000 UTC m=+0.145570550 container start 992718ee01da41c2804175a3bbb3868da424160e5b7d43524f3a00d592291d71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shaw, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:20:52 compute-0 podman[140794]: 2026-01-29 09:20:52.475975799 +0000 UTC m=+0.151879341 container attach 992718ee01da41c2804175a3bbb3868da424160e5b7d43524f3a00d592291d71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shaw, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:20:52 compute-0 ceph-mon[75183]: pgmap v305: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:52 compute-0 sudo[140953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndeqnkjqrpchmybkjlftslpadfvwfpgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678452.4406643-547-230721404801351/AnsiballZ_container_config_hash.py'
Jan 29 09:20:52 compute-0 sudo[140953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:52 compute-0 magical_shaw[140811]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:20:52 compute-0 magical_shaw[140811]: --> All data devices are unavailable
Jan 29 09:20:52 compute-0 systemd[1]: libpod-992718ee01da41c2804175a3bbb3868da424160e5b7d43524f3a00d592291d71.scope: Deactivated successfully.
Jan 29 09:20:52 compute-0 podman[140794]: 2026-01-29 09:20:52.943587592 +0000 UTC m=+0.619491105 container died 992718ee01da41c2804175a3bbb3868da424160e5b7d43524f3a00d592291d71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shaw, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 09:20:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7595cc95472c59b9ebdd97af05cd5e96232ce5aff7ae0af91a8d16449a7578a2-merged.mount: Deactivated successfully.
Jan 29 09:20:53 compute-0 podman[140794]: 2026-01-29 09:20:53.014341432 +0000 UTC m=+0.690244934 container remove 992718ee01da41c2804175a3bbb3868da424160e5b7d43524f3a00d592291d71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:20:53 compute-0 systemd[1]: libpod-conmon-992718ee01da41c2804175a3bbb3868da424160e5b7d43524f3a00d592291d71.scope: Deactivated successfully.
Jan 29 09:20:53 compute-0 sudo[140619]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:53 compute-0 python3.9[140955]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 29 09:20:53 compute-0 sudo[140953]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:53 compute-0 sudo[140971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:20:53 compute-0 sudo[140971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:20:53 compute-0 sudo[140971]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:53 compute-0 sudo[141008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:20:53 compute-0 sudo[141008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:20:53 compute-0 podman[141098]: 2026-01-29 09:20:53.447654125 +0000 UTC m=+0.039175514 container create b380fbe99b03769612e0765f4279454be40bf573122f3d4a78b5de35e20cabe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_khayyam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:20:53 compute-0 systemd[1]: Started libpod-conmon-b380fbe99b03769612e0765f4279454be40bf573122f3d4a78b5de35e20cabe8.scope.
Jan 29 09:20:53 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:20:53 compute-0 podman[141098]: 2026-01-29 09:20:53.430053587 +0000 UTC m=+0.021574986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:20:53 compute-0 podman[141098]: 2026-01-29 09:20:53.526453932 +0000 UTC m=+0.117975341 container init b380fbe99b03769612e0765f4279454be40bf573122f3d4a78b5de35e20cabe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_khayyam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:20:53 compute-0 podman[141098]: 2026-01-29 09:20:53.535056515 +0000 UTC m=+0.126577894 container start b380fbe99b03769612e0765f4279454be40bf573122f3d4a78b5de35e20cabe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 09:20:53 compute-0 brave_khayyam[141125]: 167 167
Jan 29 09:20:53 compute-0 systemd[1]: libpod-b380fbe99b03769612e0765f4279454be40bf573122f3d4a78b5de35e20cabe8.scope: Deactivated successfully.
Jan 29 09:20:53 compute-0 podman[141098]: 2026-01-29 09:20:53.539845465 +0000 UTC m=+0.131366904 container attach b380fbe99b03769612e0765f4279454be40bf573122f3d4a78b5de35e20cabe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_khayyam, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 09:20:53 compute-0 podman[141098]: 2026-01-29 09:20:53.543211947 +0000 UTC m=+0.134733336 container died b380fbe99b03769612e0765f4279454be40bf573122f3d4a78b5de35e20cabe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ec93f657848a1ae6f9494742a11bab52bea27e8179413d4407de522f828ea74-merged.mount: Deactivated successfully.
Jan 29 09:20:53 compute-0 podman[141098]: 2026-01-29 09:20:53.588455954 +0000 UTC m=+0.179977323 container remove b380fbe99b03769612e0765f4279454be40bf573122f3d4a78b5de35e20cabe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:20:53 compute-0 systemd[1]: libpod-conmon-b380fbe99b03769612e0765f4279454be40bf573122f3d4a78b5de35e20cabe8.scope: Deactivated successfully.
Jan 29 09:20:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:20:53 compute-0 podman[141150]: 2026-01-29 09:20:53.786214588 +0000 UTC m=+0.105017880 container create bc430d407bccba77030d7cf55037ca3c3b564277cb85ebf07e0946fc2b80f535 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_diffie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 09:20:53 compute-0 podman[141150]: 2026-01-29 09:20:53.70849092 +0000 UTC m=+0.027294232 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:20:53 compute-0 systemd[1]: Started libpod-conmon-bc430d407bccba77030d7cf55037ca3c3b564277cb85ebf07e0946fc2b80f535.scope.
Jan 29 09:20:53 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:20:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93825a6cc2dfb298ae2d12fe83e8264b186c5934d2f43d53622723424ad507c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:20:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93825a6cc2dfb298ae2d12fe83e8264b186c5934d2f43d53622723424ad507c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:20:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93825a6cc2dfb298ae2d12fe83e8264b186c5934d2f43d53622723424ad507c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:20:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93825a6cc2dfb298ae2d12fe83e8264b186c5934d2f43d53622723424ad507c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:20:53 compute-0 podman[141150]: 2026-01-29 09:20:53.891867424 +0000 UTC m=+0.210670736 container init bc430d407bccba77030d7cf55037ca3c3b564277cb85ebf07e0946fc2b80f535 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_diffie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 09:20:53 compute-0 sudo[141242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvwkcagzeeohjgphxhtjxzskxhqqdafu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769678453.355892-557-218046794366844/AnsiballZ_edpm_container_manage.py'
Jan 29 09:20:53 compute-0 sudo[141242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:53 compute-0 podman[141150]: 2026-01-29 09:20:53.898566725 +0000 UTC m=+0.217370007 container start bc430d407bccba77030d7cf55037ca3c3b564277cb85ebf07e0946fc2b80f535 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_diffie, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 09:20:53 compute-0 podman[141150]: 2026-01-29 09:20:53.903707405 +0000 UTC m=+0.222510697 container attach bc430d407bccba77030d7cf55037ca3c3b564277cb85ebf07e0946fc2b80f535 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:20:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v306: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:54 compute-0 python3[141245]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 29 09:20:54 compute-0 amazing_diffie[141213]: {
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:     "0": [
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:         {
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "devices": [
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "/dev/loop3"
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             ],
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_name": "ceph_lv0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_size": "21470642176",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "name": "ceph_lv0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "tags": {
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.cluster_name": "ceph",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.crush_device_class": "",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.encrypted": "0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.objectstore": "bluestore",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.osd_id": "0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.type": "block",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.vdo": "0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.with_tpm": "0"
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             },
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "type": "block",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "vg_name": "ceph_vg0"
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:         }
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:     ],
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:     "1": [
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:         {
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "devices": [
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "/dev/loop4"
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             ],
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_name": "ceph_lv1",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_size": "21470642176",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "name": "ceph_lv1",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "tags": {
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.cluster_name": "ceph",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.crush_device_class": "",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.encrypted": "0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.objectstore": "bluestore",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.osd_id": "1",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.type": "block",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.vdo": "0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.with_tpm": "0"
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             },
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "type": "block",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "vg_name": "ceph_vg1"
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:         }
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:     ],
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:     "2": [
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:         {
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "devices": [
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "/dev/loop5"
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             ],
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_name": "ceph_lv2",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_size": "21470642176",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "name": "ceph_lv2",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "tags": {
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.cluster_name": "ceph",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.crush_device_class": "",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.encrypted": "0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.objectstore": "bluestore",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.osd_id": "2",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.type": "block",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.vdo": "0",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:                 "ceph.with_tpm": "0"
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             },
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "type": "block",
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:             "vg_name": "ceph_vg2"
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:         }
Jan 29 09:20:54 compute-0 amazing_diffie[141213]:     ]
Jan 29 09:20:54 compute-0 amazing_diffie[141213]: }
Jan 29 09:20:54 compute-0 systemd[1]: libpod-bc430d407bccba77030d7cf55037ca3c3b564277cb85ebf07e0946fc2b80f535.scope: Deactivated successfully.
Jan 29 09:20:54 compute-0 podman[141150]: 2026-01-29 09:20:54.2176319 +0000 UTC m=+0.536435202 container died bc430d407bccba77030d7cf55037ca3c3b564277cb85ebf07e0946fc2b80f535 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_diffie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:20:54 compute-0 ceph-mon[75183]: pgmap v306: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-93825a6cc2dfb298ae2d12fe83e8264b186c5934d2f43d53622723424ad507c3-merged.mount: Deactivated successfully.
Jan 29 09:20:54 compute-0 podman[141150]: 2026-01-29 09:20:54.378540194 +0000 UTC m=+0.697343486 container remove bc430d407bccba77030d7cf55037ca3c3b564277cb85ebf07e0946fc2b80f535 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:20:54 compute-0 systemd[1]: libpod-conmon-bc430d407bccba77030d7cf55037ca3c3b564277cb85ebf07e0946fc2b80f535.scope: Deactivated successfully.
Jan 29 09:20:54 compute-0 sudo[141008]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:54 compute-0 sudo[141287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:20:54 compute-0 sudo[141287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:20:54 compute-0 sudo[141287]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:54 compute-0 sudo[141312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:20:54 compute-0 sudo[141312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:20:54 compute-0 podman[141348]: 2026-01-29 09:20:54.87794776 +0000 UTC m=+0.040865659 container create 0241e4c60fdbf8253ecefeee425e358becdfad973da49f869edfb174f7c56036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 09:20:54 compute-0 systemd[1]: Started libpod-conmon-0241e4c60fdbf8253ecefeee425e358becdfad973da49f869edfb174f7c56036.scope.
Jan 29 09:20:54 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:20:54 compute-0 podman[141348]: 2026-01-29 09:20:54.859242013 +0000 UTC m=+0.022159962 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:20:54 compute-0 podman[141348]: 2026-01-29 09:20:54.96567004 +0000 UTC m=+0.128587959 container init 0241e4c60fdbf8253ecefeee425e358becdfad973da49f869edfb174f7c56036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_murdock, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:20:54 compute-0 podman[141348]: 2026-01-29 09:20:54.975040274 +0000 UTC m=+0.137958173 container start 0241e4c60fdbf8253ecefeee425e358becdfad973da49f869edfb174f7c56036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 09:20:54 compute-0 distracted_murdock[141369]: 167 167
Jan 29 09:20:54 compute-0 systemd[1]: libpod-0241e4c60fdbf8253ecefeee425e358becdfad973da49f869edfb174f7c56036.scope: Deactivated successfully.
Jan 29 09:20:54 compute-0 podman[141348]: 2026-01-29 09:20:54.979069663 +0000 UTC m=+0.141987592 container attach 0241e4c60fdbf8253ecefeee425e358becdfad973da49f869edfb174f7c56036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 09:20:54 compute-0 podman[141348]: 2026-01-29 09:20:54.980302786 +0000 UTC m=+0.143220685 container died 0241e4c60fdbf8253ecefeee425e358becdfad973da49f869edfb174f7c56036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_murdock, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 29 09:20:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f22ed69deb4b276a0e0e117cbf20c79c9006b5dd6b81e4e6e4abe4bcf04e1d8-merged.mount: Deactivated successfully.
Jan 29 09:20:55 compute-0 podman[141348]: 2026-01-29 09:20:55.034461805 +0000 UTC m=+0.197379704 container remove 0241e4c60fdbf8253ecefeee425e358becdfad973da49f869edfb174f7c56036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_murdock, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:20:55 compute-0 systemd[1]: libpod-conmon-0241e4c60fdbf8253ecefeee425e358becdfad973da49f869edfb174f7c56036.scope: Deactivated successfully.
Jan 29 09:20:55 compute-0 podman[141408]: 2026-01-29 09:20:55.217703306 +0000 UTC m=+0.055188518 container create dda3f469d3798b08d1f8f6872334917bbc5e607f697e0b8ec8ce9bcc42537966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_neumann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 09:20:55 compute-0 systemd[1]: Started libpod-conmon-dda3f469d3798b08d1f8f6872334917bbc5e607f697e0b8ec8ce9bcc42537966.scope.
Jan 29 09:20:55 compute-0 podman[141408]: 2026-01-29 09:20:55.194597679 +0000 UTC m=+0.032082911 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:20:55 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a588f25bda6a2e9b05f7d37bbc71e2eb4fe44aef949c11647ef3197624713b2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a588f25bda6a2e9b05f7d37bbc71e2eb4fe44aef949c11647ef3197624713b2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a588f25bda6a2e9b05f7d37bbc71e2eb4fe44aef949c11647ef3197624713b2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a588f25bda6a2e9b05f7d37bbc71e2eb4fe44aef949c11647ef3197624713b2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:20:55 compute-0 podman[141408]: 2026-01-29 09:20:55.340827965 +0000 UTC m=+0.178313177 container init dda3f469d3798b08d1f8f6872334917bbc5e607f697e0b8ec8ce9bcc42537966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_neumann, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 09:20:55 compute-0 podman[141408]: 2026-01-29 09:20:55.355928275 +0000 UTC m=+0.193413487 container start dda3f469d3798b08d1f8f6872334917bbc5e607f697e0b8ec8ce9bcc42537966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_neumann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:20:55 compute-0 podman[141408]: 2026-01-29 09:20:55.359947564 +0000 UTC m=+0.197432796 container attach dda3f469d3798b08d1f8f6872334917bbc5e607f697e0b8ec8ce9bcc42537966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_neumann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:20:55 compute-0 lvm[141501]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:20:55 compute-0 lvm[141501]: VG ceph_vg0 finished
Jan 29 09:20:55 compute-0 lvm[141504]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:20:55 compute-0 lvm[141504]: VG ceph_vg1 finished
Jan 29 09:20:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:20:55
Jan 29 09:20:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:20:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:20:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'backups', '.mgr', 'images', 'vms']
Jan 29 09:20:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:20:55 compute-0 lvm[141506]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:20:55 compute-0 lvm[141506]: VG ceph_vg2 finished
Jan 29 09:20:55 compute-0 lvm[141507]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:20:55 compute-0 lvm[141507]: VG ceph_vg0 finished
Jan 29 09:20:56 compute-0 interesting_neumann[141425]: {}
Jan 29 09:20:56 compute-0 systemd[1]: libpod-dda3f469d3798b08d1f8f6872334917bbc5e607f697e0b8ec8ce9bcc42537966.scope: Deactivated successfully.
Jan 29 09:20:56 compute-0 podman[141408]: 2026-01-29 09:20:56.103992045 +0000 UTC m=+0.941477257 container died dda3f469d3798b08d1f8f6872334917bbc5e607f697e0b8ec8ce9bcc42537966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:20:56 compute-0 systemd[1]: libpod-dda3f469d3798b08d1f8f6872334917bbc5e607f697e0b8ec8ce9bcc42537966.scope: Consumed 1.048s CPU time.
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v307: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:20:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:20:57 compute-0 ceph-mon[75183]: pgmap v307: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a588f25bda6a2e9b05f7d37bbc71e2eb4fe44aef949c11647ef3197624713b2f-merged.mount: Deactivated successfully.
Jan 29 09:20:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v308: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:58 compute-0 ceph-mon[75183]: pgmap v308: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:20:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:20:58 compute-0 podman[141408]: 2026-01-29 09:20:58.777254144 +0000 UTC m=+3.614739356 container remove dda3f469d3798b08d1f8f6872334917bbc5e607f697e0b8ec8ce9bcc42537966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:20:58 compute-0 systemd[1]: libpod-conmon-dda3f469d3798b08d1f8f6872334917bbc5e607f697e0b8ec8ce9bcc42537966.scope: Deactivated successfully.
Jan 29 09:20:58 compute-0 podman[141262]: 2026-01-29 09:20:58.811588585 +0000 UTC m=+4.574418246 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 29 09:20:58 compute-0 sudo[141312]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:20:58 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:20:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:20:58 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:20:58 compute-0 sudo[141600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:20:58 compute-0 sudo[141600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:20:58 compute-0 sudo[141600]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:58 compute-0 podman[141634]: 2026-01-29 09:20:58.941383405 +0000 UTC m=+0.047252712 container create 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Jan 29 09:20:58 compute-0 podman[141634]: 2026-01-29 09:20:58.917446786 +0000 UTC m=+0.023316113 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 29 09:20:58 compute-0 python3[141245]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 29 09:20:59 compute-0 sudo[141242]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:59 compute-0 sudo[141825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otsoozovgbmnvipopzhbtquhjdmpucxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678459.199888-565-49614122824898/AnsiballZ_stat.py'
Jan 29 09:20:59 compute-0 sudo[141825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:20:59 compute-0 python3.9[141827]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:20:59 compute-0 sudo[141825]: pam_unix(sudo:session): session closed for user root
Jan 29 09:20:59 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:20:59 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:21:00 compute-0 sudo[141979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjsndfsvdutskfnvunicnmneicranheg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678459.8240297-574-212927526766440/AnsiballZ_file.py'
Jan 29 09:21:00 compute-0 sudo[141979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v309: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:00 compute-0 python3.9[141981]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:21:00 compute-0 sudo[141979]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:00 compute-0 sudo[142055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzzsbhnsyifyldhezmjyblcfsmnyqiyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678459.8240297-574-212927526766440/AnsiballZ_stat.py'
Jan 29 09:21:00 compute-0 sudo[142055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:00 compute-0 python3.9[142057]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:21:00 compute-0 sudo[142055]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:00 compute-0 ceph-mon[75183]: pgmap v309: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:01 compute-0 sudo[142206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lttoeurpmvtkfdjlaeduagekjuktxlqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678460.66945-574-154622772851966/AnsiballZ_copy.py'
Jan 29 09:21:01 compute-0 sudo[142206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:01 compute-0 python3.9[142208]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769678460.66945-574-154622772851966/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:21:01 compute-0 sudo[142206]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:01 compute-0 sudo[142282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpihukymphtschjwpnnavfzdateghlyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678460.66945-574-154622772851966/AnsiballZ_systemd.py'
Jan 29 09:21:01 compute-0 sudo[142282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:21:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:21:01 compute-0 python3.9[142284]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 09:21:01 compute-0 systemd[1]: Reloading.
Jan 29 09:21:01 compute-0 systemd-rc-local-generator[142302]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:21:01 compute-0 systemd-sysv-generator[142312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:21:02 compute-0 sudo[142282]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v310: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:02 compute-0 sudo[142393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lusmmdtsfqvhqzudttlyjieiazbcqzlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678460.66945-574-154622772851966/AnsiballZ_systemd.py'
Jan 29 09:21:02 compute-0 sudo[142393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:02 compute-0 python3.9[142395]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:21:02 compute-0 systemd[1]: Reloading.
Jan 29 09:21:02 compute-0 systemd-rc-local-generator[142425]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:21:02 compute-0 systemd-sysv-generator[142428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:21:02 compute-0 systemd[1]: Starting ovn_controller container...
Jan 29 09:21:02 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79f652d2df8a1856667d3535ff6a76b5520e2db3f6830730f47a5b08c8b6002/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 29 09:21:03 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc.
Jan 29 09:21:03 compute-0 podman[142436]: 2026-01-29 09:21:03.023942769 +0000 UTC m=+0.104143426 container init 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 29 09:21:03 compute-0 ovn_controller[142452]: + sudo -E kolla_set_configs
Jan 29 09:21:03 compute-0 podman[142436]: 2026-01-29 09:21:03.052477083 +0000 UTC m=+0.132677750 container start 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 29 09:21:03 compute-0 edpm-start-podman-container[142436]: ovn_controller
Jan 29 09:21:03 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 29 09:21:03 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 29 09:21:03 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 29 09:21:03 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 29 09:21:03 compute-0 systemd[142493]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 29 09:21:03 compute-0 edpm-start-podman-container[142435]: Creating additional drop-in dependency for "ovn_controller" (7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc)
Jan 29 09:21:03 compute-0 podman[142459]: 2026-01-29 09:21:03.121502465 +0000 UTC m=+0.061790587 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 29 09:21:03 compute-0 systemd[1]: 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc-d7c30e30417e63b.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 09:21:03 compute-0 systemd[1]: 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc-d7c30e30417e63b.service: Failed with result 'exit-code'.
Jan 29 09:21:03 compute-0 systemd[1]: Reloading.
Jan 29 09:21:03 compute-0 systemd-rc-local-generator[142539]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:21:03 compute-0 ceph-mon[75183]: pgmap v310: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:03 compute-0 systemd-sysv-generator[142543]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:21:03 compute-0 systemd[142493]: Queued start job for default target Main User Target.
Jan 29 09:21:03 compute-0 systemd[142493]: Created slice User Application Slice.
Jan 29 09:21:03 compute-0 systemd[142493]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 29 09:21:03 compute-0 systemd[142493]: Started Daily Cleanup of User's Temporary Directories.
Jan 29 09:21:03 compute-0 systemd[142493]: Reached target Paths.
Jan 29 09:21:03 compute-0 systemd[142493]: Reached target Timers.
Jan 29 09:21:03 compute-0 systemd[142493]: Starting D-Bus User Message Bus Socket...
Jan 29 09:21:03 compute-0 systemd[142493]: Starting Create User's Volatile Files and Directories...
Jan 29 09:21:03 compute-0 systemd[142493]: Finished Create User's Volatile Files and Directories.
Jan 29 09:21:03 compute-0 systemd[142493]: Listening on D-Bus User Message Bus Socket.
Jan 29 09:21:03 compute-0 systemd[142493]: Reached target Sockets.
Jan 29 09:21:03 compute-0 systemd[142493]: Reached target Basic System.
Jan 29 09:21:03 compute-0 systemd[142493]: Reached target Main User Target.
Jan 29 09:21:03 compute-0 systemd[142493]: Startup finished in 145ms.
Jan 29 09:21:03 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 29 09:21:03 compute-0 systemd[1]: Started ovn_controller container.
Jan 29 09:21:03 compute-0 systemd[1]: Started Session c1 of User root.
Jan 29 09:21:03 compute-0 sudo[142393]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:03 compute-0 ovn_controller[142452]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 29 09:21:03 compute-0 ovn_controller[142452]: INFO:__main__:Validating config file
Jan 29 09:21:03 compute-0 ovn_controller[142452]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 29 09:21:03 compute-0 ovn_controller[142452]: INFO:__main__:Writing out command to execute
Jan 29 09:21:03 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 29 09:21:03 compute-0 ovn_controller[142452]: ++ cat /run_command
Jan 29 09:21:03 compute-0 ovn_controller[142452]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 29 09:21:03 compute-0 ovn_controller[142452]: + ARGS=
Jan 29 09:21:03 compute-0 ovn_controller[142452]: + sudo kolla_copy_cacerts
Jan 29 09:21:03 compute-0 systemd[1]: Started Session c2 of User root.
Jan 29 09:21:03 compute-0 ovn_controller[142452]: + [[ ! -n '' ]]
Jan 29 09:21:03 compute-0 ovn_controller[142452]: + . kolla_extend_start
Jan 29 09:21:03 compute-0 ovn_controller[142452]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 29 09:21:03 compute-0 ovn_controller[142452]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 29 09:21:03 compute-0 ovn_controller[142452]: + umask 0022
Jan 29 09:21:03 compute-0 ovn_controller[142452]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 29 09:21:03 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 29 09:21:03 compute-0 NetworkManager[49019]: <info>  [1769678463.5340] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 29 09:21:03 compute-0 NetworkManager[49019]: <info>  [1769678463.5350] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 09:21:03 compute-0 NetworkManager[49019]: <warn>  [1769678463.5353] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 29 09:21:03 compute-0 NetworkManager[49019]: <info>  [1769678463.5360] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 29 09:21:03 compute-0 NetworkManager[49019]: <info>  [1769678463.5365] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 29 09:21:03 compute-0 NetworkManager[49019]: <info>  [1769678463.5369] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 29 09:21:03 compute-0 kernel: br-int: entered promiscuous mode
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 29 09:21:03 compute-0 systemd-udevd[142586]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 09:21:03 compute-0 NetworkManager[49019]: <info>  [1769678463.5663] manager: (ovn-c5279e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 29 09:21:03 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 29 09:21:03 compute-0 NetworkManager[49019]: <info>  [1769678463.5873] device (genev_sys_6081): carrier: link connected
Jan 29 09:21:03 compute-0 NetworkManager[49019]: <info>  [1769678463.5877] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 29 09:21:03 compute-0 ovn_controller[142452]: 2026-01-29T09:21:03Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 29 09:21:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:21:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v311: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:04 compute-0 python3.9[142717]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 29 09:21:04 compute-0 sudo[142887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxgtghxmbhlwzntzytedolfjxlmxsdkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678464.5590522-619-221555819587690/AnsiballZ_stat.py'
Jan 29 09:21:04 compute-0 sudo[142887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:05 compute-0 ceph-mon[75183]: pgmap v311: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v312: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:06 compute-0 python3.9[142889]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:06 compute-0 sudo[142887]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:06 compute-0 ceph-mon[75183]: pgmap v312: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:06 compute-0 sudo[143010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehicvqrkqskqdkrybjtkwsplomjoktkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678464.5590522-619-221555819587690/AnsiballZ_copy.py'
Jan 29 09:21:06 compute-0 sudo[143010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:07 compute-0 python3.9[143012]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678464.5590522-619-221555819587690/.source.yaml _original_basename=.iaox4rtq follow=False checksum=fd2ce409e00e3eba91faef5c96915f103dd1d424 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:21:07 compute-0 sudo[143010]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:07 compute-0 sudo[143162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgvjuonrtgckdoatxhyfycjialmvnsso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678467.321341-634-247191811686644/AnsiballZ_command.py'
Jan 29 09:21:07 compute-0 sudo[143162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:07 compute-0 python3.9[143164]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:21:07 compute-0 ovs-vsctl[143165]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 29 09:21:07 compute-0 sudo[143162]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v313: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:08 compute-0 sudo[143315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clrlzhpaaqgiymduzlgwznxoksjnaykw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678467.908697-642-98423163651291/AnsiballZ_command.py'
Jan 29 09:21:08 compute-0 sudo[143315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:08 compute-0 python3.9[143317]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:21:08 compute-0 ovs-vsctl[143319]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 29 09:21:08 compute-0 sudo[143315]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:21:09 compute-0 sudo[143470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjdyncfjcgkbskbprxjxksrlmrkaaqnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678468.956031-656-185975508826947/AnsiballZ_command.py'
Jan 29 09:21:09 compute-0 sudo[143470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:09 compute-0 ceph-mon[75183]: pgmap v313: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:09 compute-0 python3.9[143472]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:21:09 compute-0 ovs-vsctl[143473]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 29 09:21:09 compute-0 sudo[143470]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:09 compute-0 sshd-session[131261]: Connection closed by 192.168.122.30 port 56924
Jan 29 09:21:09 compute-0 sshd-session[131258]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:21:09 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 29 09:21:09 compute-0 systemd[1]: session-45.scope: Consumed 51.694s CPU time.
Jan 29 09:21:09 compute-0 systemd-logind[799]: Session 45 logged out. Waiting for processes to exit.
Jan 29 09:21:09 compute-0 systemd-logind[799]: Removed session 45.
Jan 29 09:21:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v314: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:10 compute-0 ceph-mon[75183]: pgmap v314: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v315: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:13 compute-0 ceph-mon[75183]: pgmap v315: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:13 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 29 09:21:13 compute-0 systemd[142493]: Activating special unit Exit the Session...
Jan 29 09:21:13 compute-0 systemd[142493]: Stopped target Main User Target.
Jan 29 09:21:13 compute-0 systemd[142493]: Stopped target Basic System.
Jan 29 09:21:13 compute-0 systemd[142493]: Stopped target Paths.
Jan 29 09:21:13 compute-0 systemd[142493]: Stopped target Sockets.
Jan 29 09:21:13 compute-0 systemd[142493]: Stopped target Timers.
Jan 29 09:21:13 compute-0 systemd[142493]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 29 09:21:13 compute-0 systemd[142493]: Closed D-Bus User Message Bus Socket.
Jan 29 09:21:13 compute-0 systemd[142493]: Stopped Create User's Volatile Files and Directories.
Jan 29 09:21:13 compute-0 systemd[142493]: Removed slice User Application Slice.
Jan 29 09:21:13 compute-0 systemd[142493]: Reached target Shutdown.
Jan 29 09:21:13 compute-0 systemd[142493]: Finished Exit the Session.
Jan 29 09:21:13 compute-0 systemd[142493]: Reached target Exit the Session.
Jan 29 09:21:13 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 29 09:21:13 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 29 09:21:13 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 29 09:21:13 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 29 09:21:13 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 29 09:21:13 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 29 09:21:13 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 29 09:21:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:21:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v316: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:14 compute-0 ceph-mon[75183]: pgmap v316: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:15 compute-0 sshd-session[143500]: Accepted publickey for zuul from 192.168.122.30 port 40798 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:21:15 compute-0 systemd-logind[799]: New session 47 of user zuul.
Jan 29 09:21:15 compute-0 systemd[1]: Started Session 47 of User zuul.
Jan 29 09:21:15 compute-0 sshd-session[143500]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:21:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v317: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:16 compute-0 python3.9[143653]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:21:17 compute-0 ceph-mon[75183]: pgmap v317: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:17 compute-0 sudo[143807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zemoqgachgkvblcbvpbipzfuivcvseks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678477.071236-29-147327477892515/AnsiballZ_file.py'
Jan 29 09:21:17 compute-0 sudo[143807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:17 compute-0 python3.9[143809]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:17 compute-0 sudo[143807]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:18 compute-0 sudo[143959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgewsyhhsaqsqthllmmtnxnfauzwdcoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678477.8648717-29-41663877320677/AnsiballZ_file.py'
Jan 29 09:21:18 compute-0 sudo[143959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v318: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:18 compute-0 python3.9[143961]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:18 compute-0 sudo[143959]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:18 compute-0 sudo[144111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrljiuvvoudborrvdryeayswupykptah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678478.5003471-29-187021796861766/AnsiballZ_file.py'
Jan 29 09:21:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:21:18 compute-0 sudo[144111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:19 compute-0 python3.9[144113]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:19 compute-0 sudo[144111]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:19 compute-0 ceph-mon[75183]: pgmap v318: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:19 compute-0 sudo[144263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zswqwqlyqzsizzqgwbocenkwwtasssnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678479.1786966-29-185523837091884/AnsiballZ_file.py'
Jan 29 09:21:19 compute-0 sudo[144263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:19 compute-0 python3.9[144265]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:19 compute-0 sudo[144263]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:20 compute-0 sudo[144415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqflyjgofcrgzytdodfmratgrepnwcqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678479.8205752-29-87249386059720/AnsiballZ_file.py'
Jan 29 09:21:20 compute-0 sudo[144415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v319: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:20 compute-0 python3.9[144417]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:20 compute-0 ceph-mon[75183]: pgmap v319: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:20 compute-0 sudo[144415]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:21 compute-0 python3.9[144567]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:21:21 compute-0 sudo[144718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdhxvsmdsrizzsransbquyvcrmojuaon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678481.2953267-73-15677094610291/AnsiballZ_seboolean.py'
Jan 29 09:21:21 compute-0 sudo[144718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:21 compute-0 python3.9[144720]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 29 09:21:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v320: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:22 compute-0 ceph-mon[75183]: pgmap v320: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:22 compute-0 sudo[144718]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:23 compute-0 python3.9[144870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:21:23 compute-0 python3.9[144991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678482.67672-81-73387186050463/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v321: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:24 compute-0 python3.9[145141]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:25 compute-0 python3.9[145262]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678484.0944324-96-5138859071470/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:25 compute-0 ceph-mon[75183]: pgmap v321: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:25 compute-0 sudo[145412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-didwrubdugyskkrlgvfhfsyfghkdndan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678485.267154-113-66593086477076/AnsiballZ_setup.py'
Jan 29 09:21:25 compute-0 sudo[145412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:25 compute-0 python3.9[145414]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:21:26 compute-0 sudo[145412]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v322: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:21:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:21:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:21:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:21:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:21:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:21:26 compute-0 sudo[145496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmddubfrzjqcbjdjtohzezwkqdmpckyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678485.267154-113-66593086477076/AnsiballZ_dnf.py'
Jan 29 09:21:26 compute-0 sudo[145496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:26 compute-0 python3.9[145498]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:21:27 compute-0 ceph-mon[75183]: pgmap v322: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:28 compute-0 sudo[145496]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v323: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:28 compute-0 ceph-mon[75183]: pgmap v323: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:21:28 compute-0 sudo[145649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyhsgeexenfhbgqaysiwyqtrdsminoyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678488.2837272-125-135315976954889/AnsiballZ_systemd.py'
Jan 29 09:21:28 compute-0 sudo[145649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:29 compute-0 python3.9[145651]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 09:21:29 compute-0 sudo[145649]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:29 compute-0 python3.9[145804]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v324: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:30 compute-0 python3.9[145925]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678489.3392184-133-173964432427289/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:30 compute-0 python3.9[146075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:31 compute-0 ceph-mon[75183]: pgmap v324: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:31 compute-0 python3.9[146196]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678490.342313-133-217351762191509/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v325: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:32 compute-0 ceph-mon[75183]: pgmap v325: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:32 compute-0 python3.9[146346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:32 compute-0 python3.9[146467]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678491.9200556-177-101414891939780/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:33 compute-0 ovn_controller[142452]: 2026-01-29T09:21:33Z|00025|memory|INFO|16000 kB peak resident set size after 29.7 seconds
Jan 29 09:21:33 compute-0 ovn_controller[142452]: 2026-01-29T09:21:33Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 29 09:21:33 compute-0 podman[146591]: 2026-01-29 09:21:33.28126792 +0000 UTC m=+0.085401158 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 29 09:21:33 compute-0 python3.9[146626]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:21:33 compute-0 python3.9[146765]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678492.9879777-177-212024844656513/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v326: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:34 compute-0 python3.9[146915]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:21:34 compute-0 sudo[147067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gffdswvwkvmzdkgkuflvardtvzzageri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678494.5678332-215-131488489671527/AnsiballZ_file.py'
Jan 29 09:21:34 compute-0 sudo[147067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:34 compute-0 python3.9[147069]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:35 compute-0 sudo[147067]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:35 compute-0 ceph-mon[75183]: pgmap v326: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:35 compute-0 sudo[147219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opwwocrtkjrpnetdozgzjesokzxmgfbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678495.1692717-223-201752043109306/AnsiballZ_stat.py'
Jan 29 09:21:35 compute-0 sudo[147219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:35 compute-0 python3.9[147221]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:35 compute-0 sudo[147219]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:35 compute-0 sudo[147297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdqgtlqsgxypdakcfsijjplubjoquaqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678495.1692717-223-201752043109306/AnsiballZ_file.py'
Jan 29 09:21:35 compute-0 sudo[147297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:36 compute-0 python3.9[147299]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:36 compute-0 sudo[147297]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v327: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:36 compute-0 sudo[147449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmahyccvwdybtftdpgitwetztboclemq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678496.1461415-223-11665755893319/AnsiballZ_stat.py'
Jan 29 09:21:36 compute-0 sudo[147449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:36 compute-0 python3.9[147451]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:36 compute-0 sudo[147449]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:36 compute-0 sudo[147527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcnldailwithgrbjxngkoewarkxsbiru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678496.1461415-223-11665755893319/AnsiballZ_file.py'
Jan 29 09:21:36 compute-0 sudo[147527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:36 compute-0 python3.9[147529]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:37 compute-0 sudo[147527]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:37 compute-0 ceph-mon[75183]: pgmap v327: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:37 compute-0 sudo[147679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zisfcfajvcsenmywdobqrhfinieqimyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678497.1358078-246-239790528932395/AnsiballZ_file.py'
Jan 29 09:21:37 compute-0 sudo[147679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:37 compute-0 python3.9[147681]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:21:37 compute-0 sudo[147679]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:37 compute-0 sudo[147831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozeyuezkeafiumgieotomvjzjlrkmjtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678497.7131362-254-89119372275517/AnsiballZ_stat.py'
Jan 29 09:21:37 compute-0 sudo[147831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:38 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:21:38 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 1781 writes, 7687 keys, 1781 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s
                                           Cumulative WAL: 1781 writes, 1781 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1781 writes, 7687 keys, 1781 commit groups, 1.0 writes per commit group, ingest: 7.95 MB, 0.01 MB/s
                                           Interval WAL: 1781 writes, 1781 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     97.1      0.06              0.02         3    0.019       0      0       0.0       0.0
                                             L6      1/0    4.05 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    168.8    144.1      0.06              0.02         2    0.032    6017    778       0.0       0.0
                                            Sum      1/0    4.05 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     88.7    121.8      0.12              0.04         5    0.024    6017    778       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     92.2    126.1      0.12              0.04         4    0.029    6017    778       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    168.8    144.1      0.06              0.02         2    0.032    6017    778       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    104.5      0.05              0.02         2    0.027       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.006, interval 0.005
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.01 GB write, 0.02 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.1 seconds
                                           Interval compaction: 0.01 GB write, 0.02 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55621d63f8d0#2 capacity: 308.00 MB usage: 575.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(40,505.42 KB,0.160252%) FilterBlock(6,24.23 KB,0.00768389%) IndexBlock(6,45.80 KB,0.0145206%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 29 09:21:38 compute-0 python3.9[147833]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:38 compute-0 sudo[147831]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v328: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:38 compute-0 ceph-mon[75183]: pgmap v328: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:38 compute-0 sudo[147909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bieytqwrrwbkhejgnfxmxljgbhyjmcjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678497.7131362-254-89119372275517/AnsiballZ_file.py'
Jan 29 09:21:38 compute-0 sudo[147909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:38 compute-0 python3.9[147911]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:21:38 compute-0 sudo[147909]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:21:38 compute-0 sudo[148061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnbiszwgpcuaoagqdsziupstdiguualg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678498.6954875-266-223779216729154/AnsiballZ_stat.py'
Jan 29 09:21:38 compute-0 sudo[148061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:39 compute-0 python3.9[148063]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:39 compute-0 sudo[148061]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:39 compute-0 sudo[148139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgsxqmtwcskatsufuaijnvzxehfniwrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678498.6954875-266-223779216729154/AnsiballZ_file.py'
Jan 29 09:21:39 compute-0 sudo[148139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:39 compute-0 python3.9[148141]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:21:39 compute-0 sudo[148139]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:39 compute-0 sudo[148291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyuxmxbdrlhgkpbrbjwbrqekxgdrtrgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678499.701999-278-234735820872683/AnsiballZ_systemd.py'
Jan 29 09:21:39 compute-0 sudo[148291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v329: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:40 compute-0 python3.9[148293]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:21:40 compute-0 systemd[1]: Reloading.
Jan 29 09:21:40 compute-0 systemd-rc-local-generator[148322]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:21:40 compute-0 systemd-sysv-generator[148325]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:21:40 compute-0 sudo[148291]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:40 compute-0 sudo[148481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoljprskhfakbudcnkjaxrajyrbmfatj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678500.6657844-286-236969345613475/AnsiballZ_stat.py'
Jan 29 09:21:40 compute-0 sudo[148481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:41 compute-0 python3.9[148483]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:41 compute-0 sudo[148481]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:41 compute-0 ceph-mon[75183]: pgmap v329: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:41 compute-0 sudo[148559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elkfphoqhtnvkuqxsghksklnmpyzkdif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678500.6657844-286-236969345613475/AnsiballZ_file.py'
Jan 29 09:21:41 compute-0 sudo[148559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:41 compute-0 python3.9[148561]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:21:41 compute-0 sudo[148559]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:41 compute-0 sudo[148711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucwkptbjnpzhnmciunwgmjqbvzvhaagk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678501.6538682-298-247311887419082/AnsiballZ_stat.py'
Jan 29 09:21:41 compute-0 sudo[148711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:42 compute-0 python3.9[148713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:42 compute-0 sudo[148711]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v330: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:42 compute-0 sudo[148789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvllbfpbeminkvfjdxkwowffgcyrgyll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678501.6538682-298-247311887419082/AnsiballZ_file.py'
Jan 29 09:21:42 compute-0 sudo[148789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:42 compute-0 python3.9[148791]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:21:42 compute-0 sudo[148789]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:42 compute-0 sudo[148941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivswxrykyvcaiidqvfbnmiyqhjdqpdtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678502.617782-310-117739551185151/AnsiballZ_systemd.py'
Jan 29 09:21:42 compute-0 sudo[148941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:43 compute-0 python3.9[148943]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:21:43 compute-0 systemd[1]: Reloading.
Jan 29 09:21:43 compute-0 ceph-mon[75183]: pgmap v330: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:43 compute-0 systemd-rc-local-generator[148965]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:21:43 compute-0 systemd-sysv-generator[148972]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:21:43 compute-0 systemd[1]: Starting Create netns directory...
Jan 29 09:21:43 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 29 09:21:43 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 29 09:21:43 compute-0 systemd[1]: Finished Create netns directory.
Jan 29 09:21:43 compute-0 sudo[148941]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:21:43 compute-0 sudo[149135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fynwtppkokjrktaqgngnoghypfqtkhjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678503.7401567-320-97486650083913/AnsiballZ_file.py'
Jan 29 09:21:43 compute-0 sudo[149135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v331: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:44 compute-0 python3.9[149137]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:44 compute-0 sudo[149135]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:44 compute-0 ceph-mon[75183]: pgmap v331: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:44 compute-0 sudo[149287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecpuwhuxsxsvgilyoydkbnnxfjmgvsmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678504.3623095-328-200458022799344/AnsiballZ_stat.py'
Jan 29 09:21:44 compute-0 sudo[149287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:44 compute-0 python3.9[149289]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:44 compute-0 sudo[149287]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:45 compute-0 sudo[149410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwigbuuqesvzaeetkgikwtpicbuovytz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678504.3623095-328-200458022799344/AnsiballZ_copy.py'
Jan 29 09:21:45 compute-0 sudo[149410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:45 compute-0 python3.9[149412]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678504.3623095-328-200458022799344/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:45 compute-0 sudo[149410]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:46 compute-0 sudo[149562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkyutkbkqhvnujjndjoucqwpghshglgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678505.783776-345-82489604566287/AnsiballZ_file.py'
Jan 29 09:21:46 compute-0 sudo[149562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v332: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:46 compute-0 python3.9[149564]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:21:46 compute-0 sudo[149562]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:46 compute-0 sudo[149714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crruvdwpujkmfzerlgrzgmszhtejdoqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678506.3811738-353-143469429203177/AnsiballZ_file.py'
Jan 29 09:21:46 compute-0 sudo[149714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:46 compute-0 python3.9[149716]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:21:46 compute-0 sudo[149714]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:47 compute-0 sudo[149866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkksjoeayqgvdrhoquhboesudnjzpgio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678506.9742804-361-94506623575037/AnsiballZ_stat.py'
Jan 29 09:21:47 compute-0 sudo[149866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:47 compute-0 ceph-mon[75183]: pgmap v332: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:47 compute-0 python3.9[149868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:21:47 compute-0 sudo[149866]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:47 compute-0 sudo[149989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmsuurdvjmwebrvygeztdgmuszdrogpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678506.9742804-361-94506623575037/AnsiballZ_copy.py'
Jan 29 09:21:47 compute-0 sudo[149989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:47 compute-0 python3.9[149991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678506.9742804-361-94506623575037/.source.json _original_basename=.3n25ju_h follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:21:47 compute-0 sudo[149989]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v333: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:48 compute-0 python3.9[150141]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:21:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:21:49 compute-0 ceph-mon[75183]: pgmap v333: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v334: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:50 compute-0 ceph-mon[75183]: pgmap v334: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:50 compute-0 sudo[150562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mklwregneritmutoibgbjqlhkujsvwgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678509.9056358-401-229741227630785/AnsiballZ_container_config_data.py'
Jan 29 09:21:50 compute-0 sudo[150562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:50 compute-0 python3.9[150564]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 29 09:21:50 compute-0 sudo[150562]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:51 compute-0 sudo[150714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvaifkpslilcihdibbzlhrytxxuxdyis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678510.8361979-412-258110252108464/AnsiballZ_container_config_hash.py'
Jan 29 09:21:51 compute-0 sudo[150714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:51 compute-0 python3.9[150716]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 29 09:21:51 compute-0 sudo[150714]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v335: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:52 compute-0 sudo[150866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgurfjmorrfyhrmqrymskgjbhmvgmmsj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769678511.7132854-422-196795880042148/AnsiballZ_edpm_container_manage.py'
Jan 29 09:21:52 compute-0 sudo[150866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:21:52 compute-0 python3[150868]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 29 09:21:53 compute-0 ceph-mon[75183]: pgmap v335: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:21:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v336: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:54 compute-0 ceph-mon[75183]: pgmap v336: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:21:55
Jan 29 09:21:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:21:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:21:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes']
Jan 29 09:21:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v337: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:56 compute-0 ceph-mon[75183]: pgmap v337: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:21:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:21:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v338: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:58 compute-0 ceph-mon[75183]: pgmap v338: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:21:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:21:58 compute-0 sudo[150963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:21:58 compute-0 sudo[150963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:21:58 compute-0 sudo[150963]: pam_unix(sudo:session): session closed for user root
Jan 29 09:21:59 compute-0 sudo[150988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:21:59 compute-0 sudo[150988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:22:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v339: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:00 compute-0 ceph-mon[75183]: pgmap v339: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:00 compute-0 sudo[150988]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:22:00 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:22:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:22:00 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:22:00 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:22:01 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:22:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:22:01 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:22:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:22:01 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:22:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:22:01 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:22:01 compute-0 sudo[151069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:22:01 compute-0 sudo[151069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:22:01 compute-0 sudo[151069]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:01 compute-0 sudo[151094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:22:01 compute-0 sudo[151094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:22:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:22:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:22:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:22:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:22:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:22:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:22:01 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:22:01 compute-0 podman[150881]: 2026-01-29 09:22:01.830464221 +0000 UTC m=+9.265236240 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 09:22:01 compute-0 podman[151134]: 2026-01-29 09:22:01.8756 +0000 UTC m=+0.047802183 container create 2fd8df68149fc012ddde3bf0f938f58863f0de130bd9de759c992850ee21bdf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_mirzakhani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:22:01 compute-0 systemd[1]: Started libpod-conmon-2fd8df68149fc012ddde3bf0f938f58863f0de130bd9de759c992850ee21bdf6.scope.
Jan 29 09:22:01 compute-0 podman[151134]: 2026-01-29 09:22:01.853871868 +0000 UTC m=+0.026074081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:22:01 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:22:01 compute-0 podman[151134]: 2026-01-29 09:22:01.971658906 +0000 UTC m=+0.143861119 container init 2fd8df68149fc012ddde3bf0f938f58863f0de130bd9de759c992850ee21bdf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 09:22:01 compute-0 podman[151134]: 2026-01-29 09:22:01.980023834 +0000 UTC m=+0.152226017 container start 2fd8df68149fc012ddde3bf0f938f58863f0de130bd9de759c992850ee21bdf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_mirzakhani, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:22:01 compute-0 podman[151134]: 2026-01-29 09:22:01.994358194 +0000 UTC m=+0.166560397 container attach 2fd8df68149fc012ddde3bf0f938f58863f0de130bd9de759c992850ee21bdf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:22:02 compute-0 elated_mirzakhani[151171]: 167 167
Jan 29 09:22:02 compute-0 systemd[1]: libpod-2fd8df68149fc012ddde3bf0f938f58863f0de130bd9de759c992850ee21bdf6.scope: Deactivated successfully.
Jan 29 09:22:02 compute-0 podman[151134]: 2026-01-29 09:22:02.004211443 +0000 UTC m=+0.176413626 container died 2fd8df68149fc012ddde3bf0f938f58863f0de130bd9de759c992850ee21bdf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 29 09:22:02 compute-0 podman[151173]: 2026-01-29 09:22:02.042939027 +0000 UTC m=+0.099389068 container create a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Jan 29 09:22:02 compute-0 podman[151173]: 2026-01-29 09:22:01.9663015 +0000 UTC m=+0.022751561 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 09:22:02 compute-0 python3[150868]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 09:22:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v340: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef40f3887169aba7bcba78a17f34948ecc1398198b98d1d44d983a3cf70c2739-merged.mount: Deactivated successfully.
Jan 29 09:22:02 compute-0 podman[151134]: 2026-01-29 09:22:02.599884166 +0000 UTC m=+0.772086359 container remove 2fd8df68149fc012ddde3bf0f938f58863f0de130bd9de759c992850ee21bdf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:22:02 compute-0 systemd[1]: libpod-conmon-2fd8df68149fc012ddde3bf0f938f58863f0de130bd9de759c992850ee21bdf6.scope: Deactivated successfully.
Jan 29 09:22:02 compute-0 sudo[150866]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:02 compute-0 podman[151236]: 2026-01-29 09:22:02.733145345 +0000 UTC m=+0.041370027 container create a8714a690bac200fa033a72ff3b38d42d2317a0148a3edd66e0ff99022494f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:22:02 compute-0 systemd[1]: Started libpod-conmon-a8714a690bac200fa033a72ff3b38d42d2317a0148a3edd66e0ff99022494f8d.scope.
Jan 29 09:22:02 compute-0 ceph-mon[75183]: pgmap v340: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:02 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446f0f9354fc9f3daeab75ff031064c835fabb2d9888fd5f362a72b815db9d42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446f0f9354fc9f3daeab75ff031064c835fabb2d9888fd5f362a72b815db9d42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446f0f9354fc9f3daeab75ff031064c835fabb2d9888fd5f362a72b815db9d42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446f0f9354fc9f3daeab75ff031064c835fabb2d9888fd5f362a72b815db9d42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446f0f9354fc9f3daeab75ff031064c835fabb2d9888fd5f362a72b815db9d42/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:02 compute-0 podman[151236]: 2026-01-29 09:22:02.716911673 +0000 UTC m=+0.025136375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:22:02 compute-0 podman[151236]: 2026-01-29 09:22:02.816473015 +0000 UTC m=+0.124697727 container init a8714a690bac200fa033a72ff3b38d42d2317a0148a3edd66e0ff99022494f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:22:02 compute-0 podman[151236]: 2026-01-29 09:22:02.822698204 +0000 UTC m=+0.130922886 container start a8714a690bac200fa033a72ff3b38d42d2317a0148a3edd66e0ff99022494f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_stonebraker, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 09:22:02 compute-0 podman[151236]: 2026-01-29 09:22:02.829629673 +0000 UTC m=+0.137854365 container attach a8714a690bac200fa033a72ff3b38d42d2317a0148a3edd66e0ff99022494f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_stonebraker, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:22:03 compute-0 sudo[151411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nryixzhfhvogddnhrechzgdqgmtuexny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678522.846976-430-211376981800934/AnsiballZ_stat.py'
Jan 29 09:22:03 compute-0 sudo[151411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:03 compute-0 python3.9[151414]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:22:03 compute-0 quizzical_stonebraker[151277]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:22:03 compute-0 quizzical_stonebraker[151277]: --> All data devices are unavailable
Jan 29 09:22:03 compute-0 sudo[151411]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:03 compute-0 systemd[1]: libpod-a8714a690bac200fa033a72ff3b38d42d2317a0148a3edd66e0ff99022494f8d.scope: Deactivated successfully.
Jan 29 09:22:03 compute-0 podman[151236]: 2026-01-29 09:22:03.328040967 +0000 UTC m=+0.636265639 container died a8714a690bac200fa033a72ff3b38d42d2317a0148a3edd66e0ff99022494f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_stonebraker, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 09:22:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-446f0f9354fc9f3daeab75ff031064c835fabb2d9888fd5f362a72b815db9d42-merged.mount: Deactivated successfully.
Jan 29 09:22:03 compute-0 podman[151236]: 2026-01-29 09:22:03.37549309 +0000 UTC m=+0.683717762 container remove a8714a690bac200fa033a72ff3b38d42d2317a0148a3edd66e0ff99022494f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_stonebraker, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:22:03 compute-0 systemd[1]: libpod-conmon-a8714a690bac200fa033a72ff3b38d42d2317a0148a3edd66e0ff99022494f8d.scope: Deactivated successfully.
Jan 29 09:22:03 compute-0 sudo[151094]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:03 compute-0 podman[151427]: 2026-01-29 09:22:03.440077348 +0000 UTC m=+0.083343760 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 09:22:03 compute-0 sudo[151489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:22:03 compute-0 sudo[151489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:22:03 compute-0 sudo[151489]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:03 compute-0 sudo[151514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:22:03 compute-0 sudo[151514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:22:03 compute-0 sudo[151681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjtdnmlipitbhwvqqqwmkoemppcxelzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678523.527291-439-158329697131650/AnsiballZ_file.py'
Jan 29 09:22:03 compute-0 sudo[151681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:22:03 compute-0 podman[151668]: 2026-01-29 09:22:03.789009831 +0000 UTC m=+0.039174818 container create c4784c9e7f8ff9a973554ee2db4a2ace254cab2dc1f81a51db4d21bfd94c0a8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 29 09:22:03 compute-0 systemd[1]: Started libpod-conmon-c4784c9e7f8ff9a973554ee2db4a2ace254cab2dc1f81a51db4d21bfd94c0a8c.scope.
Jan 29 09:22:03 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:22:03 compute-0 podman[151668]: 2026-01-29 09:22:03.772920043 +0000 UTC m=+0.023085050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:22:03 compute-0 podman[151668]: 2026-01-29 09:22:03.877745688 +0000 UTC m=+0.127910705 container init c4784c9e7f8ff9a973554ee2db4a2ace254cab2dc1f81a51db4d21bfd94c0a8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:22:03 compute-0 podman[151668]: 2026-01-29 09:22:03.884517292 +0000 UTC m=+0.134682279 container start c4784c9e7f8ff9a973554ee2db4a2ace254cab2dc1f81a51db4d21bfd94c0a8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 29 09:22:03 compute-0 quizzical_visvesvaraya[151691]: 167 167
Jan 29 09:22:03 compute-0 systemd[1]: libpod-c4784c9e7f8ff9a973554ee2db4a2ace254cab2dc1f81a51db4d21bfd94c0a8c.scope: Deactivated successfully.
Jan 29 09:22:03 compute-0 podman[151668]: 2026-01-29 09:22:03.890834904 +0000 UTC m=+0.140999921 container attach c4784c9e7f8ff9a973554ee2db4a2ace254cab2dc1f81a51db4d21bfd94c0a8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:22:03 compute-0 podman[151668]: 2026-01-29 09:22:03.891283826 +0000 UTC m=+0.141448813 container died c4784c9e7f8ff9a973554ee2db4a2ace254cab2dc1f81a51db4d21bfd94c0a8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 09:22:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e118016427d74e7ba9c232a53acbc5f8a99c3736014fab01c70806bf1126111-merged.mount: Deactivated successfully.
Jan 29 09:22:03 compute-0 podman[151668]: 2026-01-29 09:22:03.95090611 +0000 UTC m=+0.201071097 container remove c4784c9e7f8ff9a973554ee2db4a2ace254cab2dc1f81a51db4d21bfd94c0a8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:22:03 compute-0 systemd[1]: libpod-conmon-c4784c9e7f8ff9a973554ee2db4a2ace254cab2dc1f81a51db4d21bfd94c0a8c.scope: Deactivated successfully.
Jan 29 09:22:03 compute-0 python3.9[151688]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:04 compute-0 sudo[151681]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:04 compute-0 podman[151723]: 2026-01-29 09:22:04.096359102 +0000 UTC m=+0.042681614 container create 796f8a99ae5a57b5ad1b5a45d3f5d39337fc130294386fad85841a61ab3f2623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 29 09:22:04 compute-0 systemd[1]: Started libpod-conmon-796f8a99ae5a57b5ad1b5a45d3f5d39337fc130294386fad85841a61ab3f2623.scope.
Jan 29 09:22:04 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:22:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9431cd87ac2cb2a605a15638d2fc449eb633b79aa153c0783490c15bdaf2785/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9431cd87ac2cb2a605a15638d2fc449eb633b79aa153c0783490c15bdaf2785/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9431cd87ac2cb2a605a15638d2fc449eb633b79aa153c0783490c15bdaf2785/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9431cd87ac2cb2a605a15638d2fc449eb633b79aa153c0783490c15bdaf2785/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:04 compute-0 podman[151723]: 2026-01-29 09:22:04.074965399 +0000 UTC m=+0.021287941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:22:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v341: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:04 compute-0 podman[151723]: 2026-01-29 09:22:04.188519012 +0000 UTC m=+0.134841554 container init 796f8a99ae5a57b5ad1b5a45d3f5d39337fc130294386fad85841a61ab3f2623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_franklin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:22:04 compute-0 podman[151723]: 2026-01-29 09:22:04.19764911 +0000 UTC m=+0.143971622 container start 796f8a99ae5a57b5ad1b5a45d3f5d39337fc130294386fad85841a61ab3f2623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:22:04 compute-0 podman[151723]: 2026-01-29 09:22:04.201885846 +0000 UTC m=+0.148208358 container attach 796f8a99ae5a57b5ad1b5a45d3f5d39337fc130294386fad85841a61ab3f2623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:22:04 compute-0 sudo[151814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emmjlpqjfdxpcnrvcqgamrrnvhvjdcwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678523.527291-439-158329697131650/AnsiballZ_stat.py'
Jan 29 09:22:04 compute-0 sudo[151814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:04 compute-0 python3.9[151816]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:22:04 compute-0 sudo[151814]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:04 compute-0 awesome_franklin[151783]: {
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:     "0": [
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:         {
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "devices": [
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "/dev/loop3"
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             ],
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_name": "ceph_lv0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_size": "21470642176",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "name": "ceph_lv0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "tags": {
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.cluster_name": "ceph",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.crush_device_class": "",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.encrypted": "0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.objectstore": "bluestore",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.osd_id": "0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.type": "block",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.vdo": "0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.with_tpm": "0"
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             },
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "type": "block",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "vg_name": "ceph_vg0"
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:         }
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:     ],
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:     "1": [
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:         {
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "devices": [
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "/dev/loop4"
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             ],
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_name": "ceph_lv1",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_size": "21470642176",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "name": "ceph_lv1",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "tags": {
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.cluster_name": "ceph",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.crush_device_class": "",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.encrypted": "0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.objectstore": "bluestore",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.osd_id": "1",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.type": "block",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.vdo": "0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.with_tpm": "0"
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             },
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "type": "block",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "vg_name": "ceph_vg1"
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:         }
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:     ],
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:     "2": [
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:         {
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "devices": [
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "/dev/loop5"
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             ],
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_name": "ceph_lv2",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_size": "21470642176",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "name": "ceph_lv2",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "tags": {
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.cluster_name": "ceph",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.crush_device_class": "",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.encrypted": "0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.objectstore": "bluestore",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.osd_id": "2",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.type": "block",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.vdo": "0",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:                 "ceph.with_tpm": "0"
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             },
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "type": "block",
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:             "vg_name": "ceph_vg2"
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:         }
Jan 29 09:22:04 compute-0 awesome_franklin[151783]:     ]
Jan 29 09:22:04 compute-0 awesome_franklin[151783]: }
Jan 29 09:22:04 compute-0 systemd[1]: libpod-796f8a99ae5a57b5ad1b5a45d3f5d39337fc130294386fad85841a61ab3f2623.scope: Deactivated successfully.
Jan 29 09:22:04 compute-0 podman[151723]: 2026-01-29 09:22:04.535356168 +0000 UTC m=+0.481678730 container died 796f8a99ae5a57b5ad1b5a45d3f5d39337fc130294386fad85841a61ab3f2623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_franklin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 29 09:22:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9431cd87ac2cb2a605a15638d2fc449eb633b79aa153c0783490c15bdaf2785-merged.mount: Deactivated successfully.
Jan 29 09:22:04 compute-0 podman[151723]: 2026-01-29 09:22:04.583096348 +0000 UTC m=+0.529418870 container remove 796f8a99ae5a57b5ad1b5a45d3f5d39337fc130294386fad85841a61ab3f2623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_franklin, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 29 09:22:04 compute-0 systemd[1]: libpod-conmon-796f8a99ae5a57b5ad1b5a45d3f5d39337fc130294386fad85841a61ab3f2623.scope: Deactivated successfully.
Jan 29 09:22:04 compute-0 sudo[151514]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:04 compute-0 sudo[151888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:22:04 compute-0 sudo[151888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:22:04 compute-0 sudo[151888]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:04 compute-0 sudo[151935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:22:04 compute-0 sudo[151935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:22:04 compute-0 sudo[152044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlctoqmnhwiabcduguqswmkwyaoyromp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678524.4692378-439-102134664772138/AnsiballZ_copy.py'
Jan 29 09:22:04 compute-0 sudo[152044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:05 compute-0 podman[152047]: 2026-01-29 09:22:05.040057554 +0000 UTC m=+0.040453683 container create e1897c553b344dcf4adeb0ddb352a2ba3456fe1d5effca0a57319e6c48cdbd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_northcutt, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:22:05 compute-0 systemd[1]: Started libpod-conmon-e1897c553b344dcf4adeb0ddb352a2ba3456fe1d5effca0a57319e6c48cdbd1d.scope.
Jan 29 09:22:05 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:22:05 compute-0 podman[152047]: 2026-01-29 09:22:05.022403513 +0000 UTC m=+0.022799642 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:22:05 compute-0 podman[152047]: 2026-01-29 09:22:05.124904715 +0000 UTC m=+0.125300864 container init e1897c553b344dcf4adeb0ddb352a2ba3456fe1d5effca0a57319e6c48cdbd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_northcutt, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 09:22:05 compute-0 podman[152047]: 2026-01-29 09:22:05.130302892 +0000 UTC m=+0.130699021 container start e1897c553b344dcf4adeb0ddb352a2ba3456fe1d5effca0a57319e6c48cdbd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 09:22:05 compute-0 crazy_northcutt[152065]: 167 167
Jan 29 09:22:05 compute-0 systemd[1]: libpod-e1897c553b344dcf4adeb0ddb352a2ba3456fe1d5effca0a57319e6c48cdbd1d.scope: Deactivated successfully.
Jan 29 09:22:05 compute-0 podman[152047]: 2026-01-29 09:22:05.136797318 +0000 UTC m=+0.137193447 container attach e1897c553b344dcf4adeb0ddb352a2ba3456fe1d5effca0a57319e6c48cdbd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 09:22:05 compute-0 podman[152047]: 2026-01-29 09:22:05.137269191 +0000 UTC m=+0.137665320 container died e1897c553b344dcf4adeb0ddb352a2ba3456fe1d5effca0a57319e6c48cdbd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_northcutt, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:22:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-16dc442044ee5b0d0def111773d4a8840118a3a1af076d6aee65571aadac5f18-merged.mount: Deactivated successfully.
Jan 29 09:22:05 compute-0 podman[152047]: 2026-01-29 09:22:05.180738425 +0000 UTC m=+0.181134554 container remove e1897c553b344dcf4adeb0ddb352a2ba3456fe1d5effca0a57319e6c48cdbd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_northcutt, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:22:05 compute-0 python3.9[152049]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769678524.4692378-439-102134664772138/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:05 compute-0 systemd[1]: libpod-conmon-e1897c553b344dcf4adeb0ddb352a2ba3456fe1d5effca0a57319e6c48cdbd1d.scope: Deactivated successfully.
Jan 29 09:22:05 compute-0 sudo[152044]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:05 compute-0 ceph-mon[75183]: pgmap v341: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:05 compute-0 podman[152112]: 2026-01-29 09:22:05.324222473 +0000 UTC m=+0.039603210 container create 6c6632bf5f2d89a83f73d75dd33e98249992c836c55498b23ec044511a17f6de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 09:22:05 compute-0 systemd[1]: Started libpod-conmon-6c6632bf5f2d89a83f73d75dd33e98249992c836c55498b23ec044511a17f6de.scope.
Jan 29 09:22:05 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:22:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891b11713be675a265d7f743482cbee7c2a490ca23107dabb3e785b716e05958/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:05 compute-0 podman[152112]: 2026-01-29 09:22:05.305320728 +0000 UTC m=+0.020701485 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:22:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891b11713be675a265d7f743482cbee7c2a490ca23107dabb3e785b716e05958/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891b11713be675a265d7f743482cbee7c2a490ca23107dabb3e785b716e05958/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891b11713be675a265d7f743482cbee7c2a490ca23107dabb3e785b716e05958/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:05 compute-0 podman[152112]: 2026-01-29 09:22:05.425430369 +0000 UTC m=+0.140811156 container init 6c6632bf5f2d89a83f73d75dd33e98249992c836c55498b23ec044511a17f6de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:22:05 compute-0 sudo[152182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euexrpnqwusgcmcgaxkfcrhqcwttqevj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678524.4692378-439-102134664772138/AnsiballZ_systemd.py'
Jan 29 09:22:05 compute-0 sudo[152182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:05 compute-0 podman[152112]: 2026-01-29 09:22:05.432107421 +0000 UTC m=+0.147488158 container start 6c6632bf5f2d89a83f73d75dd33e98249992c836c55498b23ec044511a17f6de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:22:05 compute-0 podman[152112]: 2026-01-29 09:22:05.439964325 +0000 UTC m=+0.155345232 container attach 6c6632bf5f2d89a83f73d75dd33e98249992c836c55498b23ec044511a17f6de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:22:05 compute-0 python3.9[152185]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 09:22:05 compute-0 systemd[1]: Reloading.
Jan 29 09:22:05 compute-0 systemd-rc-local-generator[152227]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:22:05 compute-0 systemd-sysv-generator[152230]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:22:06 compute-0 sudo[152182]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:06 compute-0 lvm[152319]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:22:06 compute-0 lvm[152319]: VG ceph_vg1 finished
Jan 29 09:22:06 compute-0 lvm[152317]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:22:06 compute-0 lvm[152317]: VG ceph_vg0 finished
Jan 29 09:22:06 compute-0 lvm[152327]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:22:06 compute-0 lvm[152327]: VG ceph_vg2 finished
Jan 29 09:22:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v342: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:06 compute-0 sudo[152373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezzvhbecgmpwcisaqfguxmteapbwbwsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678524.4692378-439-102134664772138/AnsiballZ_systemd.py'
Jan 29 09:22:06 compute-0 sudo[152373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:06 compute-0 zen_brattain[152153]: {}
Jan 29 09:22:06 compute-0 ceph-mon[75183]: pgmap v342: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:06 compute-0 systemd[1]: libpod-6c6632bf5f2d89a83f73d75dd33e98249992c836c55498b23ec044511a17f6de.scope: Deactivated successfully.
Jan 29 09:22:06 compute-0 systemd[1]: libpod-6c6632bf5f2d89a83f73d75dd33e98249992c836c55498b23ec044511a17f6de.scope: Consumed 1.213s CPU time.
Jan 29 09:22:06 compute-0 podman[152112]: 2026-01-29 09:22:06.323641042 +0000 UTC m=+1.039021779 container died 6c6632bf5f2d89a83f73d75dd33e98249992c836c55498b23ec044511a17f6de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 09:22:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-891b11713be675a265d7f743482cbee7c2a490ca23107dabb3e785b716e05958-merged.mount: Deactivated successfully.
Jan 29 09:22:06 compute-0 podman[152112]: 2026-01-29 09:22:06.387045779 +0000 UTC m=+1.102426516 container remove 6c6632bf5f2d89a83f73d75dd33e98249992c836c55498b23ec044511a17f6de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 09:22:06 compute-0 systemd[1]: libpod-conmon-6c6632bf5f2d89a83f73d75dd33e98249992c836c55498b23ec044511a17f6de.scope: Deactivated successfully.
Jan 29 09:22:06 compute-0 sudo[151935]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:22:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:22:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:22:06 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:22:06 compute-0 sudo[152389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:22:06 compute-0 sudo[152389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:22:06 compute-0 sudo[152389]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:06 compute-0 python3.9[152375]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:22:06 compute-0 systemd[1]: Reloading.
Jan 29 09:22:06 compute-0 systemd-rc-local-generator[152442]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:22:06 compute-0 systemd-sysv-generator[152446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:22:06 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 29 09:22:06 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:22:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9150edefc07347428c2c9bf33500ec42e1b7d4e0f354a90e2895b86081842c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9150edefc07347428c2c9bf33500ec42e1b7d4e0f354a90e2895b86081842c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 09:22:07 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4.
Jan 29 09:22:07 compute-0 podman[152454]: 2026-01-29 09:22:07.039844547 +0000 UTC m=+0.152473422 container init a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: + sudo -E kolla_set_configs
Jan 29 09:22:07 compute-0 podman[152454]: 2026-01-29 09:22:07.073878174 +0000 UTC m=+0.186507049 container start a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 09:22:07 compute-0 edpm-start-podman-container[152454]: ovn_metadata_agent
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Validating config file
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Copying service configuration files
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Writing out command to execute
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: ++ cat /run_command
Jan 29 09:22:07 compute-0 edpm-start-podman-container[152453]: Creating additional drop-in dependency for "ovn_metadata_agent" (a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4)
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: + CMD=neutron-ovn-metadata-agent
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: + ARGS=
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: + sudo kolla_copy_cacerts
Jan 29 09:22:07 compute-0 podman[152478]: 2026-01-29 09:22:07.13357249 +0000 UTC m=+0.050619439 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: + [[ ! -n '' ]]
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: + . kolla_extend_start
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: Running command: 'neutron-ovn-metadata-agent'
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: + umask 0022
Jan 29 09:22:07 compute-0 ovn_metadata_agent[152471]: + exec neutron-ovn-metadata-agent
Jan 29 09:22:07 compute-0 systemd[1]: Reloading.
Jan 29 09:22:07 compute-0 systemd-sysv-generator[152549]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:22:07 compute-0 systemd-rc-local-generator[152546]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:22:07 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 29 09:22:07 compute-0 sudo[152373]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:07 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:22:07 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:22:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v343: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:08 compute-0 python3.9[152708]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 29 09:22:08 compute-0 sudo[152858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqbtovufvsbtqvtohszxkwodbmqtjske ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678528.6848211-484-228866636607206/AnsiballZ_stat.py'
Jan 29 09:22:08 compute-0 sudo[152858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.983 152476 INFO neutron.common.config [-] Logging enabled!
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.983 152476 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.983 152476 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.984 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.984 152476 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.984 152476 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.985 152476 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.985 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.985 152476 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.985 152476 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.985 152476 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.985 152476 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.985 152476 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.985 152476 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.986 152476 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.986 152476 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.986 152476 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.986 152476 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.986 152476 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.986 152476 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.986 152476 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.986 152476 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.986 152476 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.987 152476 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.987 152476 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.987 152476 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.987 152476 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.987 152476 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.987 152476 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.987 152476 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.987 152476 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.987 152476 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.988 152476 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.988 152476 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.988 152476 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.988 152476 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.988 152476 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.988 152476 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.988 152476 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.989 152476 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.989 152476 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.989 152476 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.989 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.989 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.989 152476 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.989 152476 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.989 152476 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.989 152476 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.989 152476 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.990 152476 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.990 152476 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.990 152476 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.990 152476 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.990 152476 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.990 152476 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.990 152476 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.990 152476 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.990 152476 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.990 152476 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.991 152476 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.991 152476 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.991 152476 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.991 152476 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.991 152476 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.991 152476 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.991 152476 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.991 152476 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.992 152476 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.992 152476 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.992 152476 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.992 152476 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.992 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.992 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.992 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.993 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.993 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.993 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.993 152476 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.993 152476 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.993 152476 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.993 152476 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.993 152476 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.993 152476 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.993 152476 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.994 152476 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.994 152476 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.994 152476 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.994 152476 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.994 152476 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.994 152476 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.994 152476 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.994 152476 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.994 152476 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.995 152476 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.995 152476 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.995 152476 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.995 152476 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.995 152476 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.995 152476 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.995 152476 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.995 152476 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.995 152476 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.995 152476 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.996 152476 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.996 152476 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.996 152476 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.996 152476 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.996 152476 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.996 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.996 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.996 152476 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.997 152476 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.997 152476 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.997 152476 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.997 152476 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.997 152476 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.997 152476 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.997 152476 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.997 152476 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.997 152476 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.998 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.998 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.998 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.998 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.998 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.998 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.998 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.998 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.998 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.999 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.999 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.999 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.999 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.999 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.999 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.999 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:08 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.999 152476 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:08.999 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.000 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.000 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.000 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.000 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.000 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.000 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.000 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.000 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.000 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.001 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.001 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.001 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.001 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.001 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.001 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.001 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.001 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.002 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.002 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.002 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.002 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.002 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.002 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.002 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.002 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.002 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.003 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.003 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.003 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.003 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.003 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.003 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.003 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.003 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.004 152476 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.004 152476 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.004 152476 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.004 152476 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.004 152476 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.004 152476 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.004 152476 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.004 152476 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.004 152476 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.005 152476 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.005 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.005 152476 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.005 152476 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.005 152476 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.005 152476 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.005 152476 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.005 152476 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.005 152476 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.006 152476 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.006 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.006 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.006 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.006 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.006 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.006 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.006 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.006 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.006 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.007 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.007 152476 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.007 152476 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.007 152476 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.007 152476 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.007 152476 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.007 152476 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.007 152476 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.007 152476 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.008 152476 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.008 152476 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.008 152476 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.008 152476 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.008 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.008 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.008 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.008 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.008 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.009 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.009 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.009 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.009 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.009 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.009 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.010 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.010 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.010 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.010 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.010 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.010 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.010 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.011 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.011 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.011 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.011 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.011 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.011 152476 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.011 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.012 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.012 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.012 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.012 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.012 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.012 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.012 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.013 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.013 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.013 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.013 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.013 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.013 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.013 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.014 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.014 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.014 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.014 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.014 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.014 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.015 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.015 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.015 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.015 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.015 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.015 152476 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.015 152476 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.016 152476 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.016 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.016 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.016 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.016 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.016 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.016 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.016 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.016 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.017 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.017 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.017 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.017 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.017 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.017 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.017 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.017 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.017 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.018 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.018 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.018 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.018 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.018 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.018 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.018 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.018 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.019 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.019 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.019 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.019 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.019 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.019 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.019 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.019 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.019 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.019 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.020 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.020 152476 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.020 152476 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.030 152476 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.030 152476 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.030 152476 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.030 152476 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.031 152476 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.044 152476 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 347a774e-f56f-46e9-8fb5-240ce07d1693 (UUID: 347a774e-f56f-46e9-8fb5-240ce07d1693) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.070 152476 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.070 152476 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.071 152476 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.071 152476 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.076 152476 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.082 152476 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.090 152476 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '347a774e-f56f-46e9-8fb5-240ce07d1693'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f49de995af0>], external_ids={}, name=347a774e-f56f-46e9-8fb5-240ce07d1693, nb_cfg_timestamp=1769678471561, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.092 152476 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f49de998b20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.093 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.093 152476 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.093 152476 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.093 152476 INFO oslo_service.service [-] Starting 1 workers
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.097 152476 DEBUG oslo_service.service [-] Started child 152861 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.100 152861 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-365466'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.101 152476 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp1qm20m3b/privsep.sock']
Jan 29 09:22:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.122 152861 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.122 152861 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.122 152861 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.127 152861 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.134 152861 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.141 152861 INFO eventlet.wsgi.server [-] (152861) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 29 09:22:09 compute-0 ceph-mon[75183]: pgmap v343: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:09 compute-0 python3.9[152860]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:22:09 compute-0 sudo[152858]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:09 compute-0 sudo[152988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gakwckehoqozkwwvjaxrmfnlobvipbyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678528.6848211-484-228866636607206/AnsiballZ_copy.py'
Jan 29 09:22:09 compute-0 sudo[152988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:09 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.877 152476 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.878 152476 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1qm20m3b/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.698 152991 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.702 152991 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.704 152991 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.705 152991 INFO oslo.privsep.daemon [-] privsep daemon running as pid 152991
Jan 29 09:22:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:09.880 152991 DEBUG oslo.privsep.daemon [-] privsep: reply[5181df3d-85fc-4dd0-9c61-63509a227f8f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 09:22:09 compute-0 python3.9[152990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678528.6848211-484-228866636607206/.source.yaml _original_basename=.12fzacbp follow=False checksum=4a9ac610446ee60acd12377b5c98a0301e3a61ca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:09 compute-0 sudo[152988]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v344: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:10 compute-0 sshd-session[143503]: Connection closed by 192.168.122.30 port 40798
Jan 29 09:22:10 compute-0 sshd-session[143500]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:22:10 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Jan 29 09:22:10 compute-0 systemd[1]: session-47.scope: Consumed 51.066s CPU time.
Jan 29 09:22:10 compute-0 systemd-logind[799]: Session 47 logged out. Waiting for processes to exit.
Jan 29 09:22:10 compute-0 systemd-logind[799]: Removed session 47.
Jan 29 09:22:10 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:10.451 152991 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:22:10 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:10.451 152991 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:22:10 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:10.451 152991 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:22:10 compute-0 ceph-mon[75183]: pgmap v344: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:10 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:10.997 152991 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf53f27-ea05-4f91-b553-b64dff7dfc9e]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 09:22:10 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:10.999 152476 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=347a774e-f56f-46e9-8fb5-240ce07d1693, column=external_ids, values=({'neutron:ovn-metadata-id': 'b5ca030a-f4df-58b5-8d50-82d6f83cd469'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.007 152476 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=347a774e-f56f-46e9-8fb5-240ce07d1693, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.015 152476 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.016 152476 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.016 152476 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.016 152476 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.016 152476 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.016 152476 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.016 152476 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.016 152476 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.016 152476 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.017 152476 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.017 152476 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.017 152476 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.017 152476 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.017 152476 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.017 152476 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.017 152476 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.018 152476 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.018 152476 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.018 152476 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.018 152476 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.018 152476 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.018 152476 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.018 152476 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.019 152476 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.019 152476 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.019 152476 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.019 152476 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.019 152476 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.020 152476 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.020 152476 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.020 152476 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.020 152476 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.020 152476 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.020 152476 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.020 152476 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.021 152476 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.021 152476 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.021 152476 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.021 152476 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.021 152476 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.021 152476 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.022 152476 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.022 152476 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.022 152476 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.022 152476 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.022 152476 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.022 152476 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.022 152476 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.023 152476 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.023 152476 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.023 152476 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.023 152476 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.023 152476 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.023 152476 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.023 152476 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.024 152476 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.024 152476 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.024 152476 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.024 152476 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.024 152476 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.024 152476 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.024 152476 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.024 152476 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.025 152476 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.025 152476 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.025 152476 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.025 152476 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.025 152476 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.025 152476 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.025 152476 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.026 152476 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.026 152476 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.026 152476 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.026 152476 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.026 152476 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.026 152476 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.026 152476 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.027 152476 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.027 152476 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.027 152476 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.027 152476 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.027 152476 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.027 152476 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.027 152476 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.028 152476 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.028 152476 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.028 152476 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.028 152476 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.028 152476 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.028 152476 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.028 152476 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.029 152476 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.029 152476 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.029 152476 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.029 152476 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.029 152476 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.029 152476 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.029 152476 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.030 152476 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.030 152476 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.030 152476 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.030 152476 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.030 152476 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.030 152476 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.030 152476 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.031 152476 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.031 152476 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.031 152476 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.031 152476 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.031 152476 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.031 152476 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.032 152476 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.032 152476 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.032 152476 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.032 152476 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.032 152476 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.032 152476 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.032 152476 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.033 152476 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.033 152476 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.033 152476 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.033 152476 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.033 152476 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.033 152476 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.034 152476 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.034 152476 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.034 152476 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.034 152476 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.034 152476 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.034 152476 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.034 152476 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.035 152476 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.035 152476 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.035 152476 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.035 152476 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.035 152476 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.035 152476 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.036 152476 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.036 152476 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.036 152476 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.036 152476 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.036 152476 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.036 152476 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.036 152476 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.036 152476 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.037 152476 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.037 152476 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.037 152476 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.037 152476 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.037 152476 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.037 152476 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.037 152476 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.038 152476 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.038 152476 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.038 152476 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.038 152476 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.038 152476 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.038 152476 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.038 152476 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.038 152476 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.038 152476 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.039 152476 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.039 152476 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.039 152476 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.039 152476 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.039 152476 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.039 152476 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.039 152476 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.039 152476 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.040 152476 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.040 152476 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.040 152476 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.040 152476 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.040 152476 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.040 152476 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.040 152476 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.041 152476 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.041 152476 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.041 152476 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.041 152476 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.041 152476 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.041 152476 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.041 152476 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.042 152476 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.042 152476 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.042 152476 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.042 152476 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.042 152476 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.042 152476 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.042 152476 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.043 152476 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.043 152476 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.043 152476 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.043 152476 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.043 152476 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.043 152476 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.043 152476 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.044 152476 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.044 152476 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.044 152476 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.044 152476 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.044 152476 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.044 152476 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.044 152476 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.044 152476 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.045 152476 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.045 152476 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.045 152476 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.045 152476 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.045 152476 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.045 152476 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.045 152476 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.046 152476 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.046 152476 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.046 152476 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.046 152476 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.046 152476 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.046 152476 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.046 152476 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.046 152476 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.047 152476 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.047 152476 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.047 152476 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.047 152476 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.047 152476 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.047 152476 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.047 152476 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.047 152476 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.048 152476 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.048 152476 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.048 152476 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.048 152476 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.048 152476 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.048 152476 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.048 152476 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.049 152476 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.049 152476 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.049 152476 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.049 152476 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.049 152476 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.049 152476 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.049 152476 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.050 152476 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.050 152476 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.050 152476 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.050 152476 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.050 152476 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.050 152476 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.050 152476 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.050 152476 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.051 152476 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.051 152476 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.051 152476 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.051 152476 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.051 152476 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.051 152476 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.051 152476 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.052 152476 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.052 152476 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.052 152476 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.052 152476 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.052 152476 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.052 152476 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.053 152476 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.053 152476 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.053 152476 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.053 152476 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.053 152476 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.053 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.053 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.054 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.054 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.054 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.054 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.054 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.054 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.055 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.055 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.055 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.055 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.055 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.055 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.055 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.056 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.056 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.056 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.056 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.056 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.056 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.056 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.057 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.057 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.057 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.057 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.057 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.057 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.058 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.058 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.058 152476 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.058 152476 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.058 152476 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.058 152476 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.059 152476 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:22:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:22:11.059 152476 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 29 09:22:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v345: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:12 compute-0 ceph-mon[75183]: pgmap v345: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:22:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v346: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:15 compute-0 ceph-mon[75183]: pgmap v346: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:15 compute-0 sshd-session[153020]: Accepted publickey for zuul from 192.168.122.30 port 44218 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:22:15 compute-0 systemd-logind[799]: New session 48 of user zuul.
Jan 29 09:22:15 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 29 09:22:15 compute-0 sshd-session[153020]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:22:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v347: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:16 compute-0 ceph-mon[75183]: pgmap v347: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:16 compute-0 python3.9[153173]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:22:17 compute-0 sudo[153327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oerulvxdjplnsjaejdntrxlcohxycoev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678536.998715-29-216030710302975/AnsiballZ_command.py'
Jan 29 09:22:17 compute-0 sudo[153327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:17 compute-0 python3.9[153329]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:22:17 compute-0 sudo[153327]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v348: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:18 compute-0 sudo[153493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npdljhshvdmwbmuqymhilvnnwjmsfblv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678538.1693966-40-28667582102862/AnsiballZ_systemd_service.py'
Jan 29 09:22:18 compute-0 sudo[153493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:19 compute-0 python3.9[153495]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 09:22:19 compute-0 systemd[1]: Reloading.
Jan 29 09:22:19 compute-0 systemd-rc-local-generator[153521]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:22:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:22:19 compute-0 systemd-sysv-generator[153526]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:22:19 compute-0 ceph-mon[75183]: pgmap v348: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:19 compute-0 sudo[153493]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:20 compute-0 python3.9[153680]: ansible-ansible.builtin.service_facts Invoked
Jan 29 09:22:20 compute-0 network[153697]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 09:22:20 compute-0 network[153698]: 'network-scripts' will be removed from distribution in near future.
Jan 29 09:22:20 compute-0 network[153699]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 09:22:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v349: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:20 compute-0 ceph-mon[75183]: pgmap v349: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v350: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:22 compute-0 ceph-mon[75183]: pgmap v350: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:22 compute-0 sudo[153959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smbvsodhejeflyryllelpibcwlghqbwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678542.3981278-59-90423923391550/AnsiballZ_systemd_service.py'
Jan 29 09:22:22 compute-0 sudo[153959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:22 compute-0 python3.9[153961]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:22:23 compute-0 sudo[153959]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:23 compute-0 sudo[154112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsxjiknluiegnsaaarbdndijvnnzugyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678543.1079912-59-135366068709372/AnsiballZ_systemd_service.py'
Jan 29 09:22:23 compute-0 sudo[154112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:23 compute-0 python3.9[154114]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:22:23 compute-0 sudo[154112]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:24 compute-0 sudo[154265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjgnnshjtrgesfgdpzsifcdqntlkfzch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678543.795907-59-178377858845343/AnsiballZ_systemd_service.py'
Jan 29 09:22:24 compute-0 sudo[154265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:22:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v351: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:24 compute-0 ceph-mon[75183]: pgmap v351: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:24 compute-0 python3.9[154267]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:22:24 compute-0 sudo[154265]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:24 compute-0 sudo[154418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxhuixeuoioldtkibmwfqzkpsqzbydos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678544.47941-59-43311056655392/AnsiballZ_systemd_service.py'
Jan 29 09:22:24 compute-0 sudo[154418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:25 compute-0 python3.9[154420]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:22:25 compute-0 sudo[154418]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:25 compute-0 sudo[154571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypjxzfzuwioqihdswztifklltokyzzlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678545.1484005-59-257220617004103/AnsiballZ_systemd_service.py'
Jan 29 09:22:25 compute-0 sudo[154571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:25 compute-0 python3.9[154573]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:22:25 compute-0 sudo[154571]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:26 compute-0 sudo[154724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbaivkqndxiyukfoljcpdyuivurfbdzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678545.8367608-59-103499298532395/AnsiballZ_systemd_service.py'
Jan 29 09:22:26 compute-0 sudo[154724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v352: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:26 compute-0 python3.9[154726]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:22:26 compute-0 sudo[154724]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:22:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:22:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:22:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:22:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:22:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:22:26 compute-0 sudo[154877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aenzfxfdngqqycynjvrtvrqnxqasvfnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678546.5140743-59-64352962789674/AnsiballZ_systemd_service.py'
Jan 29 09:22:26 compute-0 sudo[154877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:27 compute-0 python3.9[154879]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:22:27 compute-0 sudo[154877]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:27 compute-0 ceph-mon[75183]: pgmap v352: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:27 compute-0 sudo[155030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tidksztxmappdzkxettrctlonoyhmkvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678547.4324374-111-137160758417718/AnsiballZ_file.py'
Jan 29 09:22:27 compute-0 sudo[155030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:28 compute-0 python3.9[155032]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:28 compute-0 sudo[155030]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v353: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:28 compute-0 ceph-mon[75183]: pgmap v353: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:28 compute-0 sudo[155182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fckmaqdtlfzczlbhuqwcbownkbjqfqqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678548.2212927-111-93238050622242/AnsiballZ_file.py'
Jan 29 09:22:28 compute-0 sudo[155182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:28 compute-0 python3.9[155184]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:28 compute-0 sudo[155182]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:28 compute-0 sudo[155334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiubdoxmqhwgraaevimlzhbswssdqthk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678548.7538977-111-153657112095715/AnsiballZ_file.py'
Jan 29 09:22:28 compute-0 sudo[155334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:22:29 compute-0 python3.9[155336]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:29 compute-0 sudo[155334]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:29 compute-0 sudo[155486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtzjuxbaospolnmoupiklgasqgpqpdhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678549.2882905-111-28771357303362/AnsiballZ_file.py'
Jan 29 09:22:29 compute-0 sudo[155486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:29 compute-0 python3.9[155488]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:29 compute-0 sudo[155486]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:30 compute-0 sudo[155638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woyjhcmvfyijgbcxqrnewbqecfdkbpro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678549.8917022-111-135059057978640/AnsiballZ_file.py'
Jan 29 09:22:30 compute-0 sudo[155638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v354: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:30 compute-0 python3.9[155640]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:30 compute-0 sudo[155638]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:30 compute-0 sudo[155790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qygiufkdpvdpblsahmrmfowrgkuzsakf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678550.4617875-111-142292266450370/AnsiballZ_file.py'
Jan 29 09:22:30 compute-0 sudo[155790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:30 compute-0 python3.9[155792]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:30 compute-0 sudo[155790]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:31 compute-0 sudo[155942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvtxczezkrozlqtolwyshqmwbwjgownt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678551.0869794-111-155227938981611/AnsiballZ_file.py'
Jan 29 09:22:31 compute-0 sudo[155942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:31 compute-0 python3.9[155944]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:31 compute-0 sudo[155942]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:31 compute-0 ceph-mon[75183]: pgmap v354: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:32 compute-0 sudo[156094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlmcqcdjqntchpvpspxsdviednesbkxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678551.835507-161-73041047067572/AnsiballZ_file.py'
Jan 29 09:22:32 compute-0 sudo[156094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v355: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:32 compute-0 python3.9[156096]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:32 compute-0 sudo[156094]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:32 compute-0 sudo[156246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raczxrchpggiaiwrykmopdzfirgjscql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678552.4008963-161-165248650177274/AnsiballZ_file.py'
Jan 29 09:22:32 compute-0 sudo[156246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:32 compute-0 ceph-mon[75183]: pgmap v355: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:32 compute-0 python3.9[156248]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:32 compute-0 sudo[156246]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:33 compute-0 sudo[156398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qffijplxlncflenlqawtijodtpprvvxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678552.9484875-161-102752038386130/AnsiballZ_file.py'
Jan 29 09:22:33 compute-0 sudo[156398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:33 compute-0 python3.9[156400]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:33 compute-0 sudo[156398]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:33 compute-0 sudo[156561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trdiixgvnuvwdtktdlfjkzzeyewliacr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678553.532604-161-62506747244069/AnsiballZ_file.py'
Jan 29 09:22:33 compute-0 sudo[156561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:33 compute-0 podman[156524]: 2026-01-29 09:22:33.854861563 +0000 UTC m=+0.095500172 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 29 09:22:34 compute-0 python3.9[156569]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:34 compute-0 sudo[156561]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:22:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v356: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:34 compute-0 ceph-mon[75183]: pgmap v356: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:34 compute-0 sudo[156728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpfolpznblfaoopqxdpetjmshxylqhaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678554.1665702-161-280258033801347/AnsiballZ_file.py'
Jan 29 09:22:34 compute-0 sudo[156728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:34 compute-0 python3.9[156730]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:34 compute-0 sudo[156728]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:34 compute-0 sudo[156880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dairyofcwuysaoullnzjyqxdkohevsoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678554.73747-161-6716763364955/AnsiballZ_file.py'
Jan 29 09:22:34 compute-0 sudo[156880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:35 compute-0 python3.9[156882]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:35 compute-0 sudo[156880]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:35 compute-0 sudo[157032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dptjsgobbaozaujlslwunaunxhafvflg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678555.307032-161-122871357522085/AnsiballZ_file.py'
Jan 29 09:22:35 compute-0 sudo[157032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:35 compute-0 python3.9[157034]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:22:35 compute-0 sudo[157032]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v357: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:36 compute-0 sudo[157184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpfgkgpquuxotxablshiaflqysgiyjvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678555.9625974-212-3723860945095/AnsiballZ_command.py'
Jan 29 09:22:36 compute-0 sudo[157184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:36 compute-0 python3.9[157186]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:22:36 compute-0 sudo[157184]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:36 compute-0 ceph-mon[75183]: pgmap v357: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:37 compute-0 python3.9[157338]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 29 09:22:37 compute-0 sudo[157499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpodimyrvhkmqufuvldnjitogcezcarr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678557.4965498-230-148185928819229/AnsiballZ_systemd_service.py'
Jan 29 09:22:37 compute-0 sudo[157499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:37 compute-0 podman[157462]: 2026-01-29 09:22:37.774843564 +0000 UTC m=+0.060069767 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 29 09:22:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v358: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:38 compute-0 python3.9[157503]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 09:22:38 compute-0 systemd[1]: Reloading.
Jan 29 09:22:38 compute-0 systemd-rc-local-generator[157533]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:22:38 compute-0 systemd-sysv-generator[157539]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:22:38 compute-0 ceph-mon[75183]: pgmap v358: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:38 compute-0 sudo[157499]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:38 compute-0 sudo[157694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-homfwbjnruadcbrnsgrwjtitadcrptyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678558.6862428-238-241044635135021/AnsiballZ_command.py'
Jan 29 09:22:38 compute-0 sudo[157694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:39 compute-0 python3.9[157696]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:22:39 compute-0 sudo[157694]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:22:39 compute-0 sudo[157847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkailygpmwqhxghkezzqpoetzhjihzis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678559.2093256-238-91633661745159/AnsiballZ_command.py'
Jan 29 09:22:39 compute-0 sudo[157847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:39 compute-0 python3.9[157849]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:22:39 compute-0 sudo[157847]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:40 compute-0 sudo[158000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lptaqouxgkvlucbisqlobpqgvmpupvlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678559.7980158-238-195530952008460/AnsiballZ_command.py'
Jan 29 09:22:40 compute-0 sudo[158000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v359: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:40 compute-0 python3.9[158002]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:22:40 compute-0 sudo[158000]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:40 compute-0 sudo[158153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guwhndkolyacnnpajvnfsokkmtgxakiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678560.3351815-238-126118465441141/AnsiballZ_command.py'
Jan 29 09:22:40 compute-0 sudo[158153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:40 compute-0 python3.9[158155]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:22:40 compute-0 sudo[158153]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:41 compute-0 sudo[158306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrqxcdvjynuavzijfndrvkcsvzzjvoqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678560.8887703-238-156221407210691/AnsiballZ_command.py'
Jan 29 09:22:41 compute-0 sudo[158306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:41 compute-0 ceph-mon[75183]: pgmap v359: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:41 compute-0 python3.9[158308]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:22:41 compute-0 sudo[158306]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:41 compute-0 sudo[158459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdgltphmdpyolhgvptnmapktgbofjhcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678561.457799-238-166266914716893/AnsiballZ_command.py'
Jan 29 09:22:41 compute-0 sudo[158459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:41 compute-0 python3.9[158461]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:22:41 compute-0 sudo[158459]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v360: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:42 compute-0 ceph-mon[75183]: pgmap v360: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:43 compute-0 sudo[158612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcrojagykernmewenovcglawupkohmlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678563.2090993-238-65967163054346/AnsiballZ_command.py'
Jan 29 09:22:43 compute-0 sudo[158612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:43 compute-0 python3.9[158614]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:22:43 compute-0 sudo[158612]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:22:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v361: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:44 compute-0 ceph-mon[75183]: pgmap v361: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:44 compute-0 sudo[158765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niyfivljmplrwscqwcwbaskbepkozsvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678563.9978778-292-251785410587672/AnsiballZ_getent.py'
Jan 29 09:22:44 compute-0 sudo[158765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:44 compute-0 python3.9[158767]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 29 09:22:44 compute-0 sudo[158765]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:45 compute-0 sudo[158918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgxshhllnqmvietjlpxxiifggwelviqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678564.7190993-300-139588592879310/AnsiballZ_group.py'
Jan 29 09:22:45 compute-0 sudo[158918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:45 compute-0 python3.9[158920]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 29 09:22:45 compute-0 groupadd[158921]: group added to /etc/group: name=libvirt, GID=42473
Jan 29 09:22:45 compute-0 groupadd[158921]: group added to /etc/gshadow: name=libvirt
Jan 29 09:22:45 compute-0 groupadd[158921]: new group: name=libvirt, GID=42473
Jan 29 09:22:45 compute-0 sudo[158918]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:46 compute-0 sudo[159076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swizzkrdorhtqqdvgtsuxsagiecvtnia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678565.549899-308-120091026476772/AnsiballZ_user.py'
Jan 29 09:22:46 compute-0 sudo[159076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v362: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:46 compute-0 python3.9[159078]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 29 09:22:46 compute-0 useradd[159080]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 29 09:22:46 compute-0 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 09:22:46 compute-0 ceph-mon[75183]: pgmap v362: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:46 compute-0 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 09:22:46 compute-0 sudo[159076]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:22:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4241 writes, 19K keys, 4241 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4241 writes, 370 syncs, 11.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4241 writes, 19K keys, 4241 commit groups, 1.0 writes per commit group, ingest: 15.92 MB, 0.03 MB/s
                                           Interval WAL: 4241 writes, 370 syncs, 11.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:22:46 compute-0 sudo[159237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohkfwjdsifbmussydbntoeyqrkgcipzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678566.6549687-319-280302965834544/AnsiballZ_setup.py'
Jan 29 09:22:46 compute-0 sudo[159237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:47 compute-0 python3.9[159239]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:22:47 compute-0 sudo[159237]: pam_unix(sudo:session): session closed for user root
Jan 29 09:22:47 compute-0 sudo[159321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urxcflqoaybxbjjuwtagpfvdlvravbeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678566.6549687-319-280302965834544/AnsiballZ_dnf.py'
Jan 29 09:22:47 compute-0 sudo[159321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:22:48 compute-0 python3.9[159323]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:22:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v363: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:48 compute-0 ceph-mon[75183]: pgmap v363: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:22:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v364: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:50 compute-0 ceph-mon[75183]: pgmap v364: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v365: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:52 compute-0 ceph-mon[75183]: pgmap v365: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:22:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v366: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:54 compute-0 ceph-mon[75183]: pgmap v366: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:22:55
Jan 29 09:22:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:22:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:22:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'volumes', 'backups']
Jan 29 09:22:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v367: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:56 compute-0 ceph-mon[75183]: pgmap v367: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:22:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:22:58 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:22:58 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Cumulative writes: 4322 writes, 19K keys, 4322 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4322 writes, 406 syncs, 10.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4322 writes, 19K keys, 4322 commit groups, 1.0 writes per commit group, ingest: 16.03 MB, 0.03 MB/s
                                           Interval WAL: 4322 writes, 406 syncs, 10.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:22:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v368: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:58 compute-0 ceph-mon[75183]: pgmap v368: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:22:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:23:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v369: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:00 compute-0 ceph-mon[75183]: pgmap v369: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:23:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:23:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v370: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:02 compute-0 ceph-mon[75183]: pgmap v370: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:23:04 compute-0 podman[159508]: 2026-01-29 09:23:04.186040812 +0000 UTC m=+0.124034419 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 29 09:23:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v371: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:04 compute-0 ceph-mon[75183]: pgmap v371: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v372: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:06 compute-0 ceph-mon[75183]: pgmap v372: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:06 compute-0 sudo[159541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:23:06 compute-0 sudo[159541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:23:06 compute-0 sudo[159541]: pam_unix(sudo:session): session closed for user root
Jan 29 09:23:06 compute-0 sudo[159566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 29 09:23:06 compute-0 sudo[159566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:23:06 compute-0 sudo[159566]: pam_unix(sudo:session): session closed for user root
Jan 29 09:23:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:23:07 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:23:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:23:07 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:23:07 compute-0 sudo[159612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:23:07 compute-0 sudo[159612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:23:07 compute-0 sudo[159612]: pam_unix(sudo:session): session closed for user root
Jan 29 09:23:07 compute-0 sudo[159637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:23:07 compute-0 sudo[159637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:23:07 compute-0 sudo[159637]: pam_unix(sudo:session): session closed for user root
Jan 29 09:23:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:23:07 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:23:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:23:07 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:23:07 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:23:07 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Cumulative writes: 4171 writes, 19K keys, 4171 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4171 writes, 357 syncs, 11.68 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4171 writes, 19K keys, 4171 commit groups, 1.0 writes per commit group, ingest: 15.88 MB, 0.03 MB/s
                                           Interval WAL: 4171 writes, 357 syncs, 11.68 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:23:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:23:07 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:23:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:23:07 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:23:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:23:07 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:23:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:23:07 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:23:07 compute-0 sudo[159693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:23:07 compute-0 sudo[159693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:23:07 compute-0 sudo[159693]: pam_unix(sudo:session): session closed for user root
Jan 29 09:23:07 compute-0 sudo[159718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:23:07 compute-0 sudo[159718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:23:08 compute-0 podman[159755]: 2026-01-29 09:23:07.983301178 +0000 UTC m=+0.024182073 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:23:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:23:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:23:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:23:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:23:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:23:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:23:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:23:08 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:23:08 compute-0 podman[159755]: 2026-01-29 09:23:08.091312258 +0000 UTC m=+0.132193103 container create ed15ca42da52eae659d332a65537982c1f7613a1d6e21f514c8daa53bc2dae7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hugle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:23:08 compute-0 systemd[1]: Started libpod-conmon-ed15ca42da52eae659d332a65537982c1f7613a1d6e21f514c8daa53bc2dae7c.scope.
Jan 29 09:23:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v373: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:08 compute-0 podman[159769]: 2026-01-29 09:23:08.218930125 +0000 UTC m=+0.149076286 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 09:23:08 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:23:08 compute-0 podman[159755]: 2026-01-29 09:23:08.307321747 +0000 UTC m=+0.348202602 container init ed15ca42da52eae659d332a65537982c1f7613a1d6e21f514c8daa53bc2dae7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hugle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:23:08 compute-0 podman[159755]: 2026-01-29 09:23:08.318126893 +0000 UTC m=+0.359007738 container start ed15ca42da52eae659d332a65537982c1f7613a1d6e21f514c8daa53bc2dae7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 09:23:08 compute-0 eloquent_hugle[159789]: 167 167
Jan 29 09:23:08 compute-0 podman[159755]: 2026-01-29 09:23:08.324440106 +0000 UTC m=+0.365320981 container attach ed15ca42da52eae659d332a65537982c1f7613a1d6e21f514c8daa53bc2dae7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hugle, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:23:08 compute-0 podman[159755]: 2026-01-29 09:23:08.326452761 +0000 UTC m=+0.367333616 container died ed15ca42da52eae659d332a65537982c1f7613a1d6e21f514c8daa53bc2dae7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hugle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 09:23:08 compute-0 systemd[1]: libpod-ed15ca42da52eae659d332a65537982c1f7613a1d6e21f514c8daa53bc2dae7c.scope: Deactivated successfully.
Jan 29 09:23:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-59d984f845d372bce1d87f5cccfa97a8f44a216101bc62a335e93ff0402a496f-merged.mount: Deactivated successfully.
Jan 29 09:23:08 compute-0 podman[159755]: 2026-01-29 09:23:08.390399243 +0000 UTC m=+0.431280088 container remove ed15ca42da52eae659d332a65537982c1f7613a1d6e21f514c8daa53bc2dae7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:23:08 compute-0 systemd[1]: libpod-conmon-ed15ca42da52eae659d332a65537982c1f7613a1d6e21f514c8daa53bc2dae7c.scope: Deactivated successfully.
Jan 29 09:23:08 compute-0 podman[159816]: 2026-01-29 09:23:08.537103333 +0000 UTC m=+0.050392412 container create 85254138ebd8e8687dec7d4b043b705b754bba1aac1224287fc5be4a4ca01768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_dubinsky, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 09:23:08 compute-0 systemd[1]: Started libpod-conmon-85254138ebd8e8687dec7d4b043b705b754bba1aac1224287fc5be4a4ca01768.scope.
Jan 29 09:23:08 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3690de8aa8068c4fec6715e72d52cfd68055bb9a65ec70d3e8381caf1a4bc22f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3690de8aa8068c4fec6715e72d52cfd68055bb9a65ec70d3e8381caf1a4bc22f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3690de8aa8068c4fec6715e72d52cfd68055bb9a65ec70d3e8381caf1a4bc22f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3690de8aa8068c4fec6715e72d52cfd68055bb9a65ec70d3e8381caf1a4bc22f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3690de8aa8068c4fec6715e72d52cfd68055bb9a65ec70d3e8381caf1a4bc22f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:23:08 compute-0 podman[159816]: 2026-01-29 09:23:08.512114958 +0000 UTC m=+0.025404027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:23:08 compute-0 podman[159816]: 2026-01-29 09:23:08.622701299 +0000 UTC m=+0.135990358 container init 85254138ebd8e8687dec7d4b043b705b754bba1aac1224287fc5be4a4ca01768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_dubinsky, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:23:08 compute-0 podman[159816]: 2026-01-29 09:23:08.627941222 +0000 UTC m=+0.141230271 container start 85254138ebd8e8687dec7d4b043b705b754bba1aac1224287fc5be4a4ca01768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_dubinsky, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:23:08 compute-0 podman[159816]: 2026-01-29 09:23:08.63515659 +0000 UTC m=+0.148445859 container attach 85254138ebd8e8687dec7d4b043b705b754bba1aac1224287fc5be4a4ca01768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_dubinsky, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 09:23:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:23:09.022 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:23:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:23:09.023 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:23:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:23:09.023 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:23:09 compute-0 tender_dubinsky[159832]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:23:09 compute-0 tender_dubinsky[159832]: --> All data devices are unavailable
Jan 29 09:23:09 compute-0 systemd[1]: libpod-85254138ebd8e8687dec7d4b043b705b754bba1aac1224287fc5be4a4ca01768.scope: Deactivated successfully.
Jan 29 09:23:09 compute-0 podman[159816]: 2026-01-29 09:23:09.070845568 +0000 UTC m=+0.584134617 container died 85254138ebd8e8687dec7d4b043b705b754bba1aac1224287fc5be4a4ca01768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:23:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3690de8aa8068c4fec6715e72d52cfd68055bb9a65ec70d3e8381caf1a4bc22f-merged.mount: Deactivated successfully.
Jan 29 09:23:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:23:09 compute-0 ceph-mon[75183]: pgmap v373: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:09 compute-0 podman[159816]: 2026-01-29 09:23:09.201365694 +0000 UTC m=+0.714654743 container remove 85254138ebd8e8687dec7d4b043b705b754bba1aac1224287fc5be4a4ca01768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_dubinsky, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 29 09:23:09 compute-0 systemd[1]: libpod-conmon-85254138ebd8e8687dec7d4b043b705b754bba1aac1224287fc5be4a4ca01768.scope: Deactivated successfully.
Jan 29 09:23:09 compute-0 sudo[159718]: pam_unix(sudo:session): session closed for user root
Jan 29 09:23:09 compute-0 sudo[159862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:23:09 compute-0 sudo[159862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:23:09 compute-0 sudo[159862]: pam_unix(sudo:session): session closed for user root
Jan 29 09:23:09 compute-0 sudo[159887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:23:09 compute-0 sudo[159887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:23:09 compute-0 podman[159924]: 2026-01-29 09:23:09.600368317 +0000 UTC m=+0.036122420 container create 7b45e4c4ac1cbc53ccba2e923ee5966bfa44d90c77b8dc86a0f497fa88644681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_edison, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:23:09 compute-0 systemd[1]: Started libpod-conmon-7b45e4c4ac1cbc53ccba2e923ee5966bfa44d90c77b8dc86a0f497fa88644681.scope.
Jan 29 09:23:09 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:23:09 compute-0 podman[159924]: 2026-01-29 09:23:09.668765671 +0000 UTC m=+0.104519804 container init 7b45e4c4ac1cbc53ccba2e923ee5966bfa44d90c77b8dc86a0f497fa88644681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 09:23:09 compute-0 podman[159924]: 2026-01-29 09:23:09.673408759 +0000 UTC m=+0.109162862 container start 7b45e4c4ac1cbc53ccba2e923ee5966bfa44d90c77b8dc86a0f497fa88644681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:23:09 compute-0 podman[159924]: 2026-01-29 09:23:09.676677868 +0000 UTC m=+0.112432001 container attach 7b45e4c4ac1cbc53ccba2e923ee5966bfa44d90c77b8dc86a0f497fa88644681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_edison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:23:09 compute-0 sleepy_edison[159940]: 167 167
Jan 29 09:23:09 compute-0 systemd[1]: libpod-7b45e4c4ac1cbc53ccba2e923ee5966bfa44d90c77b8dc86a0f497fa88644681.scope: Deactivated successfully.
Jan 29 09:23:09 compute-0 podman[159924]: 2026-01-29 09:23:09.678985302 +0000 UTC m=+0.114739405 container died 7b45e4c4ac1cbc53ccba2e923ee5966bfa44d90c77b8dc86a0f497fa88644681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_edison, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:23:09 compute-0 podman[159924]: 2026-01-29 09:23:09.583612758 +0000 UTC m=+0.019366881 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:23:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-757e1c54b504533f0b1ba128a55534f2e0366d6b3fad8ecd91f6ae0d0e13d157-merged.mount: Deactivated successfully.
Jan 29 09:23:09 compute-0 podman[159924]: 2026-01-29 09:23:09.719017888 +0000 UTC m=+0.154771991 container remove 7b45e4c4ac1cbc53ccba2e923ee5966bfa44d90c77b8dc86a0f497fa88644681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_edison, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 09:23:09 compute-0 systemd[1]: libpod-conmon-7b45e4c4ac1cbc53ccba2e923ee5966bfa44d90c77b8dc86a0f497fa88644681.scope: Deactivated successfully.
Jan 29 09:23:09 compute-0 podman[159963]: 2026-01-29 09:23:09.837979988 +0000 UTC m=+0.022939229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:23:09 compute-0 podman[159963]: 2026-01-29 09:23:09.99717026 +0000 UTC m=+0.182129501 container create 2e938d8be255e1f3ff4a86a106bea1adace3f4f27e63af44b05f4253ce25a6dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 09:23:10 compute-0 systemd[1]: Started libpod-conmon-2e938d8be255e1f3ff4a86a106bea1adace3f4f27e63af44b05f4253ce25a6dc.scope.
Jan 29 09:23:10 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8823de5596afeb9e47df61862ff5bdf3f09829d734837c94bcdc49557c61a4b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8823de5596afeb9e47df61862ff5bdf3f09829d734837c94bcdc49557c61a4b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8823de5596afeb9e47df61862ff5bdf3f09829d734837c94bcdc49557c61a4b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8823de5596afeb9e47df61862ff5bdf3f09829d734837c94bcdc49557c61a4b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:23:10 compute-0 podman[159963]: 2026-01-29 09:23:10.148357653 +0000 UTC m=+0.333316914 container init 2e938d8be255e1f3ff4a86a106bea1adace3f4f27e63af44b05f4253ce25a6dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_hellman, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:23:10 compute-0 podman[159963]: 2026-01-29 09:23:10.158279925 +0000 UTC m=+0.343239206 container start 2e938d8be255e1f3ff4a86a106bea1adace3f4f27e63af44b05f4253ce25a6dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_hellman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:23:10 compute-0 podman[159963]: 2026-01-29 09:23:10.16248935 +0000 UTC m=+0.347448641 container attach 2e938d8be255e1f3ff4a86a106bea1adace3f4f27e63af44b05f4253ce25a6dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 09:23:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v374: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:10 compute-0 ceph-mon[75183]: pgmap v374: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:10 compute-0 trusting_hellman[159980]: {
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:     "0": [
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:         {
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "devices": [
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "/dev/loop3"
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             ],
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_name": "ceph_lv0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_size": "21470642176",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "name": "ceph_lv0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "tags": {
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.cluster_name": "ceph",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.crush_device_class": "",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.encrypted": "0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.objectstore": "bluestore",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.osd_id": "0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.type": "block",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.vdo": "0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.with_tpm": "0"
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             },
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "type": "block",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "vg_name": "ceph_vg0"
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:         }
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:     ],
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:     "1": [
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:         {
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "devices": [
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "/dev/loop4"
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             ],
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_name": "ceph_lv1",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_size": "21470642176",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "name": "ceph_lv1",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "tags": {
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.cluster_name": "ceph",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.crush_device_class": "",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.encrypted": "0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.objectstore": "bluestore",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.osd_id": "1",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.type": "block",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.vdo": "0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.with_tpm": "0"
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             },
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "type": "block",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "vg_name": "ceph_vg1"
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:         }
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:     ],
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:     "2": [
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:         {
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "devices": [
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "/dev/loop5"
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             ],
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_name": "ceph_lv2",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_size": "21470642176",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "name": "ceph_lv2",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "tags": {
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.cluster_name": "ceph",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.crush_device_class": "",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.encrypted": "0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.objectstore": "bluestore",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.osd_id": "2",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.type": "block",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.vdo": "0",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:                 "ceph.with_tpm": "0"
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             },
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "type": "block",
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:             "vg_name": "ceph_vg2"
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:         }
Jan 29 09:23:10 compute-0 trusting_hellman[159980]:     ]
Jan 29 09:23:10 compute-0 trusting_hellman[159980]: }
Jan 29 09:23:10 compute-0 systemd[1]: libpod-2e938d8be255e1f3ff4a86a106bea1adace3f4f27e63af44b05f4253ce25a6dc.scope: Deactivated successfully.
Jan 29 09:23:10 compute-0 podman[159963]: 2026-01-29 09:23:10.453112953 +0000 UTC m=+0.638072194 container died 2e938d8be255e1f3ff4a86a106bea1adace3f4f27e63af44b05f4253ce25a6dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 09:23:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8823de5596afeb9e47df61862ff5bdf3f09829d734837c94bcdc49557c61a4b1-merged.mount: Deactivated successfully.
Jan 29 09:23:10 compute-0 podman[159963]: 2026-01-29 09:23:10.494322032 +0000 UTC m=+0.679281273 container remove 2e938d8be255e1f3ff4a86a106bea1adace3f4f27e63af44b05f4253ce25a6dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:23:10 compute-0 systemd[1]: libpod-conmon-2e938d8be255e1f3ff4a86a106bea1adace3f4f27e63af44b05f4253ce25a6dc.scope: Deactivated successfully.
Jan 29 09:23:10 compute-0 sudo[159887]: pam_unix(sudo:session): session closed for user root
Jan 29 09:23:10 compute-0 sudo[160001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:23:10 compute-0 sudo[160001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:23:10 compute-0 sudo[160001]: pam_unix(sudo:session): session closed for user root
Jan 29 09:23:10 compute-0 sudo[160026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:23:10 compute-0 sudo[160026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:23:10 compute-0 podman[160063]: 2026-01-29 09:23:10.925915248 +0000 UTC m=+0.039982156 container create a0dcafaf40d9b36eb73ae85f1cc906ced62a26aeb3c3c4f98a31c2aed0c753fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chatelet, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:23:10 compute-0 systemd[1]: Started libpod-conmon-a0dcafaf40d9b36eb73ae85f1cc906ced62a26aeb3c3c4f98a31c2aed0c753fd.scope.
Jan 29 09:23:10 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:23:11 compute-0 podman[160063]: 2026-01-29 09:23:11.001020466 +0000 UTC m=+0.115087394 container init a0dcafaf40d9b36eb73ae85f1cc906ced62a26aeb3c3c4f98a31c2aed0c753fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chatelet, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:23:11 compute-0 podman[160063]: 2026-01-29 09:23:10.907846623 +0000 UTC m=+0.021913551 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:23:11 compute-0 podman[160063]: 2026-01-29 09:23:11.006540178 +0000 UTC m=+0.120607126 container start a0dcafaf40d9b36eb73ae85f1cc906ced62a26aeb3c3c4f98a31c2aed0c753fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chatelet, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:23:11 compute-0 keen_chatelet[160080]: 167 167
Jan 29 09:23:11 compute-0 conmon[160080]: conmon a0dcafaf40d9b36eb73a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0dcafaf40d9b36eb73ae85f1cc906ced62a26aeb3c3c4f98a31c2aed0c753fd.scope/container/memory.events
Jan 29 09:23:11 compute-0 systemd[1]: libpod-a0dcafaf40d9b36eb73ae85f1cc906ced62a26aeb3c3c4f98a31c2aed0c753fd.scope: Deactivated successfully.
Jan 29 09:23:11 compute-0 podman[160063]: 2026-01-29 09:23:11.011713539 +0000 UTC m=+0.125780477 container attach a0dcafaf40d9b36eb73ae85f1cc906ced62a26aeb3c3c4f98a31c2aed0c753fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chatelet, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:23:11 compute-0 podman[160063]: 2026-01-29 09:23:11.012990824 +0000 UTC m=+0.127057752 container died a0dcafaf40d9b36eb73ae85f1cc906ced62a26aeb3c3c4f98a31c2aed0c753fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 09:23:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0013d555fe338216d99af7cf13a2c705c2ee91daac64272e755c0c44e4e652cb-merged.mount: Deactivated successfully.
Jan 29 09:23:11 compute-0 podman[160063]: 2026-01-29 09:23:11.056681681 +0000 UTC m=+0.170748589 container remove a0dcafaf40d9b36eb73ae85f1cc906ced62a26aeb3c3c4f98a31c2aed0c753fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 09:23:11 compute-0 systemd[1]: libpod-conmon-a0dcafaf40d9b36eb73ae85f1cc906ced62a26aeb3c3c4f98a31c2aed0c753fd.scope: Deactivated successfully.
Jan 29 09:23:11 compute-0 podman[160103]: 2026-01-29 09:23:11.178238142 +0000 UTC m=+0.039888264 container create 125891eed403a91bcbd8e8dc3c03bb668fc9027428a0021b55869352ddba2f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:23:11 compute-0 systemd[1]: Started libpod-conmon-125891eed403a91bcbd8e8dc3c03bb668fc9027428a0021b55869352ddba2f9f.scope.
Jan 29 09:23:11 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:23:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5fa10b1b0d4caa27d8c42da70a118d2fdbe8c834c2b997e31d9973dfdd9093/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:23:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5fa10b1b0d4caa27d8c42da70a118d2fdbe8c834c2b997e31d9973dfdd9093/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:23:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5fa10b1b0d4caa27d8c42da70a118d2fdbe8c834c2b997e31d9973dfdd9093/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:23:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5fa10b1b0d4caa27d8c42da70a118d2fdbe8c834c2b997e31d9973dfdd9093/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:23:11 compute-0 podman[160103]: 2026-01-29 09:23:11.160176187 +0000 UTC m=+0.021826329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:23:11 compute-0 podman[160103]: 2026-01-29 09:23:11.27341196 +0000 UTC m=+0.135062122 container init 125891eed403a91bcbd8e8dc3c03bb668fc9027428a0021b55869352ddba2f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_volhard, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 09:23:11 compute-0 podman[160103]: 2026-01-29 09:23:11.28288374 +0000 UTC m=+0.144533882 container start 125891eed403a91bcbd8e8dc3c03bb668fc9027428a0021b55869352ddba2f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_volhard, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 09:23:11 compute-0 podman[160103]: 2026-01-29 09:23:11.290985152 +0000 UTC m=+0.152635334 container attach 125891eed403a91bcbd8e8dc3c03bb668fc9027428a0021b55869352ddba2f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_volhard, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:23:11 compute-0 lvm[160199]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:23:11 compute-0 lvm[160199]: VG ceph_vg1 finished
Jan 29 09:23:11 compute-0 lvm[160198]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:23:11 compute-0 lvm[160198]: VG ceph_vg0 finished
Jan 29 09:23:11 compute-0 lvm[160201]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:23:11 compute-0 lvm[160201]: VG ceph_vg2 finished
Jan 29 09:23:12 compute-0 focused_volhard[160120]: {}
Jan 29 09:23:12 compute-0 podman[160103]: 2026-01-29 09:23:12.096541894 +0000 UTC m=+0.958192016 container died 125891eed403a91bcbd8e8dc3c03bb668fc9027428a0021b55869352ddba2f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_volhard, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:23:12 compute-0 systemd[1]: libpod-125891eed403a91bcbd8e8dc3c03bb668fc9027428a0021b55869352ddba2f9f.scope: Deactivated successfully.
Jan 29 09:23:12 compute-0 systemd[1]: libpod-125891eed403a91bcbd8e8dc3c03bb668fc9027428a0021b55869352ddba2f9f.scope: Consumed 1.144s CPU time.
Jan 29 09:23:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-de5fa10b1b0d4caa27d8c42da70a118d2fdbe8c834c2b997e31d9973dfdd9093-merged.mount: Deactivated successfully.
Jan 29 09:23:12 compute-0 podman[160103]: 2026-01-29 09:23:12.158581803 +0000 UTC m=+1.020231925 container remove 125891eed403a91bcbd8e8dc3c03bb668fc9027428a0021b55869352ddba2f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 09:23:12 compute-0 systemd[1]: libpod-conmon-125891eed403a91bcbd8e8dc3c03bb668fc9027428a0021b55869352ddba2f9f.scope: Deactivated successfully.
Jan 29 09:23:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v375: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:12 compute-0 sudo[160026]: pam_unix(sudo:session): session closed for user root
Jan 29 09:23:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:23:12 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:23:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:23:12 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:23:12 compute-0 sudo[160218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:23:12 compute-0 sudo[160218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:23:12 compute-0 sudo[160218]: pam_unix(sudo:session): session closed for user root
Jan 29 09:23:13 compute-0 ceph-mon[75183]: pgmap v375: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:23:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:23:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:23:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v376: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:15 compute-0 ceph-mon[75183]: pgmap v376: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:15 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 29 09:23:15 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 09:23:15 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 29 09:23:15 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 09:23:15 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 29 09:23:15 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 09:23:15 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 09:23:15 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 09:23:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v377: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:16 compute-0 ceph-mgr[75473]: [devicehealth INFO root] Check health
Jan 29 09:23:16 compute-0 ceph-mon[75183]: pgmap v377: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v378: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:18 compute-0 ceph-mon[75183]: pgmap v378: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:23:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v379: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:20 compute-0 ceph-mon[75183]: pgmap v379: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v380: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:22 compute-0 ceph-mon[75183]: pgmap v380: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:23:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v381: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:24 compute-0 ceph-mon[75183]: pgmap v381: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:25 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 29 09:23:25 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 09:23:25 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 29 09:23:25 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 09:23:25 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 29 09:23:25 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 09:23:25 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 09:23:25 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 09:23:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v382: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:26 compute-0 ceph-mon[75183]: pgmap v382: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:23:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:23:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:23:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:23:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:23:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:23:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v383: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:28 compute-0 ceph-mon[75183]: pgmap v383: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:23:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v384: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:30 compute-0 ceph-mon[75183]: pgmap v384: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v385: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:32 compute-0 ceph-mon[75183]: pgmap v385: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:23:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v386: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:34 compute-0 ceph-mon[75183]: pgmap v386: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.327832) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678614327867, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1562, "num_deletes": 251, "total_data_size": 1714917, "memory_usage": 1750912, "flush_reason": "Manual Compaction"}
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678614338112, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1661723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7445, "largest_seqno": 9006, "table_properties": {"data_size": 1654674, "index_size": 4124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14126, "raw_average_key_size": 19, "raw_value_size": 1640386, "raw_average_value_size": 2210, "num_data_blocks": 194, "num_entries": 742, "num_filter_entries": 742, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769678449, "oldest_key_time": 1769678449, "file_creation_time": 1769678614, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 10358 microseconds, and 3022 cpu microseconds.
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.338190) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1661723 bytes OK
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.338208) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.339591) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.339606) EVENT_LOG_v1 {"time_micros": 1769678614339602, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.339624) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1708148, prev total WAL file size 1708148, number of live WAL files 2.
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.340070) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1622KB)], [23(4148KB)]
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678614340128, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 5909897, "oldest_snapshot_seqno": -1}
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 2808 keys, 4722134 bytes, temperature: kUnknown
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678614364097, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 4722134, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4701215, "index_size": 12810, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7045, "raw_key_size": 65357, "raw_average_key_size": 23, "raw_value_size": 4648723, "raw_average_value_size": 1655, "num_data_blocks": 575, "num_entries": 2808, "num_filter_entries": 2808, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677896, "oldest_key_time": 0, "file_creation_time": 1769678614, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.364285) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 4722134 bytes
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.365533) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 246.0 rd, 196.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 4.1 +0.0 blob) out(4.5 +0.0 blob), read-write-amplify(6.4) write-amplify(2.8) OK, records in: 3322, records dropped: 514 output_compression: NoCompression
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.365554) EVENT_LOG_v1 {"time_micros": 1769678614365544, "job": 8, "event": "compaction_finished", "compaction_time_micros": 24023, "compaction_time_cpu_micros": 11520, "output_level": 6, "num_output_files": 1, "total_output_size": 4722134, "num_input_records": 3322, "num_output_records": 2808, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678614365788, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678614366169, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.339975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.366231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.366236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.366238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.366240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:23:34 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:23:34.366242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:23:35 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 29 09:23:35 compute-0 podman[160258]: 2026-01-29 09:23:35.15143288 +0000 UTC m=+0.081202416 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 29 09:23:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v387: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:36 compute-0 ceph-mon[75183]: pgmap v387: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v388: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:38 compute-0 ceph-mon[75183]: pgmap v388: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:39 compute-0 podman[161661]: 2026-01-29 09:23:39.100158127 +0000 UTC m=+0.039173704 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:23:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:23:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v389: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:40 compute-0 ceph-mon[75183]: pgmap v389: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v390: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:42 compute-0 ceph-mon[75183]: pgmap v390: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:23:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v391: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:44 compute-0 ceph-mon[75183]: pgmap v391: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v392: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:46 compute-0 ceph-mon[75183]: pgmap v392: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v393: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:48 compute-0 ceph-mon[75183]: pgmap v393: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:23:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v394: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:50 compute-0 ceph-mon[75183]: pgmap v394: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v395: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:52 compute-0 ceph-mon[75183]: pgmap v395: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:23:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v396: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:54 compute-0 ceph-mon[75183]: pgmap v396: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:23:55
Jan 29 09:23:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:23:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:23:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.mgr', 'backups', 'volumes']
Jan 29 09:23:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v397: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:56 compute-0 ceph-mon[75183]: pgmap v397: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:23:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:23:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v398: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:58 compute-0 ceph-mon[75183]: pgmap v398: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:23:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:24:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v399: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:00 compute-0 ceph-mon[75183]: pgmap v399: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:24:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:24:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v400: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:02 compute-0 ceph-mon[75183]: pgmap v400: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:24:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v401: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:04 compute-0 ceph-mon[75183]: pgmap v401: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:06 compute-0 podman[177176]: 2026-01-29 09:24:06.147447827 +0000 UTC m=+0.091442285 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 29 09:24:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v402: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:06 compute-0 ceph-mon[75183]: pgmap v402: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v403: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:08 compute-0 ceph-mon[75183]: pgmap v403: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:24:09.023 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:24:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:24:09.023 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:24:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:24:09.024 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:24:09 compute-0 kernel: SELinux:  Converting 2778 SID table entries...
Jan 29 09:24:09 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 09:24:09 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 29 09:24:09 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 09:24:09 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 29 09:24:09 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 09:24:09 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 09:24:09 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 09:24:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:24:09 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 29 09:24:09 compute-0 podman[177210]: 2026-01-29 09:24:09.785345607 +0000 UTC m=+0.046117314 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 29 09:24:10 compute-0 groupadd[177233]: group added to /etc/group: name=dnsmasq, GID=992
Jan 29 09:24:10 compute-0 groupadd[177233]: group added to /etc/gshadow: name=dnsmasq
Jan 29 09:24:10 compute-0 groupadd[177233]: new group: name=dnsmasq, GID=992
Jan 29 09:24:10 compute-0 useradd[177240]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 29 09:24:10 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Jan 29 09:24:10 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Jan 29 09:24:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v404: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:10 compute-0 ceph-mon[75183]: pgmap v404: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:11 compute-0 groupadd[177253]: group added to /etc/group: name=clevis, GID=991
Jan 29 09:24:11 compute-0 groupadd[177253]: group added to /etc/gshadow: name=clevis
Jan 29 09:24:11 compute-0 groupadd[177253]: new group: name=clevis, GID=991
Jan 29 09:24:11 compute-0 useradd[177260]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 29 09:24:11 compute-0 usermod[177270]: add 'clevis' to group 'tss'
Jan 29 09:24:11 compute-0 usermod[177270]: add 'clevis' to shadow group 'tss'
Jan 29 09:24:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v405: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:12 compute-0 ceph-mon[75183]: pgmap v405: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:12 compute-0 sudo[177294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:24:12 compute-0 sudo[177294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:24:12 compute-0 sudo[177294]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:12 compute-0 sudo[177319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:24:12 compute-0 sudo[177319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:24:12 compute-0 sudo[177319]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 29 09:24:12 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 09:24:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:24:12 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:24:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:24:12 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:24:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:24:12 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:24:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:24:12 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:24:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:24:12 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:24:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:24:12 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:24:12 compute-0 sudo[177375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:24:12 compute-0 sudo[177375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:24:12 compute-0 sudo[177375]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:13 compute-0 sudo[177400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:24:13 compute-0 sudo[177400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:24:13 compute-0 podman[177437]: 2026-01-29 09:24:13.307319416 +0000 UTC m=+0.048698224 container create 3b83ca6f0dc93bceb9bacc999e006661c80b8263565d913ae3cf07c985c1116d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wright, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 09:24:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 09:24:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:24:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:24:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:24:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:24:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:24:13 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:24:13 compute-0 systemd[1]: Started libpod-conmon-3b83ca6f0dc93bceb9bacc999e006661c80b8263565d913ae3cf07c985c1116d.scope.
Jan 29 09:24:13 compute-0 podman[177437]: 2026-01-29 09:24:13.276660993 +0000 UTC m=+0.018039821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:24:13 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:24:13 compute-0 podman[177437]: 2026-01-29 09:24:13.407123458 +0000 UTC m=+0.148502306 container init 3b83ca6f0dc93bceb9bacc999e006661c80b8263565d913ae3cf07c985c1116d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wright, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 09:24:13 compute-0 podman[177437]: 2026-01-29 09:24:13.413483191 +0000 UTC m=+0.154861979 container start 3b83ca6f0dc93bceb9bacc999e006661c80b8263565d913ae3cf07c985c1116d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wright, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:24:13 compute-0 systemd[1]: libpod-3b83ca6f0dc93bceb9bacc999e006661c80b8263565d913ae3cf07c985c1116d.scope: Deactivated successfully.
Jan 29 09:24:13 compute-0 practical_wright[177454]: 167 167
Jan 29 09:24:13 compute-0 conmon[177454]: conmon 3b83ca6f0dc93bceb9ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b83ca6f0dc93bceb9bacc999e006661c80b8263565d913ae3cf07c985c1116d.scope/container/memory.events
Jan 29 09:24:13 compute-0 podman[177437]: 2026-01-29 09:24:13.426908926 +0000 UTC m=+0.168287754 container attach 3b83ca6f0dc93bceb9bacc999e006661c80b8263565d913ae3cf07c985c1116d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:24:13 compute-0 podman[177437]: 2026-01-29 09:24:13.428061307 +0000 UTC m=+0.169440115 container died 3b83ca6f0dc93bceb9bacc999e006661c80b8263565d913ae3cf07c985c1116d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:24:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9f443751cec43619d6ff7c91b3dd60faab4bf78590c01d08f6e86f087c80f9b-merged.mount: Deactivated successfully.
Jan 29 09:24:13 compute-0 polkitd[43507]: Reloading rules
Jan 29 09:24:13 compute-0 polkitd[43507]: Collecting garbage unconditionally...
Jan 29 09:24:13 compute-0 polkitd[43507]: Loading rules from directory /etc/polkit-1/rules.d
Jan 29 09:24:13 compute-0 polkitd[43507]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 29 09:24:13 compute-0 polkitd[43507]: Finished loading, compiling and executing 3 rules
Jan 29 09:24:13 compute-0 polkitd[43507]: Reloading rules
Jan 29 09:24:13 compute-0 polkitd[43507]: Collecting garbage unconditionally...
Jan 29 09:24:13 compute-0 polkitd[43507]: Loading rules from directory /etc/polkit-1/rules.d
Jan 29 09:24:13 compute-0 polkitd[43507]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 29 09:24:13 compute-0 polkitd[43507]: Finished loading, compiling and executing 3 rules
Jan 29 09:24:13 compute-0 podman[177437]: 2026-01-29 09:24:13.536606566 +0000 UTC m=+0.277985364 container remove 3b83ca6f0dc93bceb9bacc999e006661c80b8263565d913ae3cf07c985c1116d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wright, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:24:13 compute-0 systemd[1]: libpod-conmon-3b83ca6f0dc93bceb9bacc999e006661c80b8263565d913ae3cf07c985c1116d.scope: Deactivated successfully.
Jan 29 09:24:13 compute-0 podman[177506]: 2026-01-29 09:24:13.690581909 +0000 UTC m=+0.055044226 container create 2078e5ed57c1b5425876e2742aa03d2c85a4c16679c76587cdd3738621bb78d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_leavitt, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:24:13 compute-0 podman[177506]: 2026-01-29 09:24:13.655318661 +0000 UTC m=+0.019780998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:24:13 compute-0 systemd[1]: Started libpod-conmon-2078e5ed57c1b5425876e2742aa03d2c85a4c16679c76587cdd3738621bb78d8.scope.
Jan 29 09:24:13 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d896adb837c2373e7d72420e1080459d08159fc838ebaac29dfd7422e0ecf652/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d896adb837c2373e7d72420e1080459d08159fc838ebaac29dfd7422e0ecf652/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d896adb837c2373e7d72420e1080459d08159fc838ebaac29dfd7422e0ecf652/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d896adb837c2373e7d72420e1080459d08159fc838ebaac29dfd7422e0ecf652/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d896adb837c2373e7d72420e1080459d08159fc838ebaac29dfd7422e0ecf652/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:24:13 compute-0 podman[177506]: 2026-01-29 09:24:13.836449843 +0000 UTC m=+0.200912170 container init 2078e5ed57c1b5425876e2742aa03d2c85a4c16679c76587cdd3738621bb78d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_leavitt, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:24:13 compute-0 podman[177506]: 2026-01-29 09:24:13.842897848 +0000 UTC m=+0.207360155 container start 2078e5ed57c1b5425876e2742aa03d2c85a4c16679c76587cdd3738621bb78d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 09:24:13 compute-0 podman[177506]: 2026-01-29 09:24:13.8569451 +0000 UTC m=+0.221407437 container attach 2078e5ed57c1b5425876e2742aa03d2c85a4c16679c76587cdd3738621bb78d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_leavitt, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:24:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:24:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v406: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:14 compute-0 keen_leavitt[177547]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:24:14 compute-0 keen_leavitt[177547]: --> All data devices are unavailable
Jan 29 09:24:14 compute-0 systemd[1]: libpod-2078e5ed57c1b5425876e2742aa03d2c85a4c16679c76587cdd3738621bb78d8.scope: Deactivated successfully.
Jan 29 09:24:14 compute-0 podman[177506]: 2026-01-29 09:24:14.281681369 +0000 UTC m=+0.646143686 container died 2078e5ed57c1b5425876e2742aa03d2c85a4c16679c76587cdd3738621bb78d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_leavitt, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 09:24:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d896adb837c2373e7d72420e1080459d08159fc838ebaac29dfd7422e0ecf652-merged.mount: Deactivated successfully.
Jan 29 09:24:14 compute-0 podman[177506]: 2026-01-29 09:24:14.337181237 +0000 UTC m=+0.701643554 container remove 2078e5ed57c1b5425876e2742aa03d2c85a4c16679c76587cdd3738621bb78d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_leavitt, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 09:24:14 compute-0 systemd[1]: libpod-conmon-2078e5ed57c1b5425876e2742aa03d2c85a4c16679c76587cdd3738621bb78d8.scope: Deactivated successfully.
Jan 29 09:24:14 compute-0 ceph-mon[75183]: pgmap v406: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:14 compute-0 sudo[177400]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:14 compute-0 sudo[177664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:24:14 compute-0 sudo[177664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:24:14 compute-0 sudo[177664]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:14 compute-0 sudo[177707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:24:14 compute-0 sudo[177707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:24:14 compute-0 podman[177754]: 2026-01-29 09:24:14.726748852 +0000 UTC m=+0.035186317 container create 7225fe3662e10a532975247a11695843031a07f13f3db6162ba97349c44e380e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 09:24:14 compute-0 systemd[1]: Started libpod-conmon-7225fe3662e10a532975247a11695843031a07f13f3db6162ba97349c44e380e.scope.
Jan 29 09:24:14 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:24:14 compute-0 podman[177754]: 2026-01-29 09:24:14.792675613 +0000 UTC m=+0.101113108 container init 7225fe3662e10a532975247a11695843031a07f13f3db6162ba97349c44e380e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chebyshev, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:24:14 compute-0 podman[177754]: 2026-01-29 09:24:14.801307778 +0000 UTC m=+0.109745253 container start 7225fe3662e10a532975247a11695843031a07f13f3db6162ba97349c44e380e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chebyshev, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 29 09:24:14 compute-0 unruffled_chebyshev[177771]: 167 167
Jan 29 09:24:14 compute-0 systemd[1]: libpod-7225fe3662e10a532975247a11695843031a07f13f3db6162ba97349c44e380e.scope: Deactivated successfully.
Jan 29 09:24:14 compute-0 podman[177754]: 2026-01-29 09:24:14.806501909 +0000 UTC m=+0.114939464 container attach 7225fe3662e10a532975247a11695843031a07f13f3db6162ba97349c44e380e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chebyshev, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 09:24:14 compute-0 podman[177754]: 2026-01-29 09:24:14.712867895 +0000 UTC m=+0.021305390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:24:14 compute-0 podman[177754]: 2026-01-29 09:24:14.807716072 +0000 UTC m=+0.116153577 container died 7225fe3662e10a532975247a11695843031a07f13f3db6162ba97349c44e380e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chebyshev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 09:24:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a26dfbfc533bf0759a1553d82c249ed66a962f8fa3abc67bbfe3c56c4d17596-merged.mount: Deactivated successfully.
Jan 29 09:24:14 compute-0 podman[177754]: 2026-01-29 09:24:14.866975272 +0000 UTC m=+0.175412747 container remove 7225fe3662e10a532975247a11695843031a07f13f3db6162ba97349c44e380e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:24:14 compute-0 systemd[1]: libpod-conmon-7225fe3662e10a532975247a11695843031a07f13f3db6162ba97349c44e380e.scope: Deactivated successfully.
Jan 29 09:24:15 compute-0 podman[177795]: 2026-01-29 09:24:15.011783256 +0000 UTC m=+0.057137323 container create 9253ec0a2af6d4c276a55aa0e124e1d10341257e9642dae54c461887f27dee35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_galois, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:24:15 compute-0 systemd[1]: Started libpod-conmon-9253ec0a2af6d4c276a55aa0e124e1d10341257e9642dae54c461887f27dee35.scope.
Jan 29 09:24:15 compute-0 podman[177795]: 2026-01-29 09:24:14.98211494 +0000 UTC m=+0.027469027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:24:15 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5289b813ab1157672e0ab2a876053b50a72a15e2d8d403a9acff2aa18e7ff7f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5289b813ab1157672e0ab2a876053b50a72a15e2d8d403a9acff2aa18e7ff7f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5289b813ab1157672e0ab2a876053b50a72a15e2d8d403a9acff2aa18e7ff7f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5289b813ab1157672e0ab2a876053b50a72a15e2d8d403a9acff2aa18e7ff7f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:24:15 compute-0 podman[177795]: 2026-01-29 09:24:15.106236652 +0000 UTC m=+0.151590729 container init 9253ec0a2af6d4c276a55aa0e124e1d10341257e9642dae54c461887f27dee35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_galois, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:24:15 compute-0 podman[177795]: 2026-01-29 09:24:15.116659365 +0000 UTC m=+0.162013422 container start 9253ec0a2af6d4c276a55aa0e124e1d10341257e9642dae54c461887f27dee35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:24:15 compute-0 podman[177795]: 2026-01-29 09:24:15.134734327 +0000 UTC m=+0.180088384 container attach 9253ec0a2af6d4c276a55aa0e124e1d10341257e9642dae54c461887f27dee35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 09:24:15 compute-0 charming_galois[177812]: {
Jan 29 09:24:15 compute-0 charming_galois[177812]:     "0": [
Jan 29 09:24:15 compute-0 charming_galois[177812]:         {
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "devices": [
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "/dev/loop3"
Jan 29 09:24:15 compute-0 charming_galois[177812]:             ],
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_name": "ceph_lv0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_size": "21470642176",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "name": "ceph_lv0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "tags": {
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.cluster_name": "ceph",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.crush_device_class": "",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.encrypted": "0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.objectstore": "bluestore",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.osd_id": "0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.type": "block",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.vdo": "0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.with_tpm": "0"
Jan 29 09:24:15 compute-0 charming_galois[177812]:             },
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "type": "block",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "vg_name": "ceph_vg0"
Jan 29 09:24:15 compute-0 charming_galois[177812]:         }
Jan 29 09:24:15 compute-0 charming_galois[177812]:     ],
Jan 29 09:24:15 compute-0 charming_galois[177812]:     "1": [
Jan 29 09:24:15 compute-0 charming_galois[177812]:         {
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "devices": [
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "/dev/loop4"
Jan 29 09:24:15 compute-0 charming_galois[177812]:             ],
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_name": "ceph_lv1",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_size": "21470642176",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "name": "ceph_lv1",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "tags": {
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.cluster_name": "ceph",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.crush_device_class": "",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.encrypted": "0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.objectstore": "bluestore",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.osd_id": "1",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.type": "block",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.vdo": "0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.with_tpm": "0"
Jan 29 09:24:15 compute-0 charming_galois[177812]:             },
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "type": "block",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "vg_name": "ceph_vg1"
Jan 29 09:24:15 compute-0 charming_galois[177812]:         }
Jan 29 09:24:15 compute-0 charming_galois[177812]:     ],
Jan 29 09:24:15 compute-0 charming_galois[177812]:     "2": [
Jan 29 09:24:15 compute-0 charming_galois[177812]:         {
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "devices": [
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "/dev/loop5"
Jan 29 09:24:15 compute-0 charming_galois[177812]:             ],
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_name": "ceph_lv2",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_size": "21470642176",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "name": "ceph_lv2",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "tags": {
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.cluster_name": "ceph",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.crush_device_class": "",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.encrypted": "0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.objectstore": "bluestore",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.osd_id": "2",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.type": "block",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.vdo": "0",
Jan 29 09:24:15 compute-0 charming_galois[177812]:                 "ceph.with_tpm": "0"
Jan 29 09:24:15 compute-0 charming_galois[177812]:             },
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "type": "block",
Jan 29 09:24:15 compute-0 charming_galois[177812]:             "vg_name": "ceph_vg2"
Jan 29 09:24:15 compute-0 charming_galois[177812]:         }
Jan 29 09:24:15 compute-0 charming_galois[177812]:     ]
Jan 29 09:24:15 compute-0 charming_galois[177812]: }
Jan 29 09:24:15 compute-0 systemd[1]: libpod-9253ec0a2af6d4c276a55aa0e124e1d10341257e9642dae54c461887f27dee35.scope: Deactivated successfully.
Jan 29 09:24:15 compute-0 podman[177795]: 2026-01-29 09:24:15.394608647 +0000 UTC m=+0.439962704 container died 9253ec0a2af6d4c276a55aa0e124e1d10341257e9642dae54c461887f27dee35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_galois, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 09:24:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5289b813ab1157672e0ab2a876053b50a72a15e2d8d403a9acff2aa18e7ff7f7-merged.mount: Deactivated successfully.
Jan 29 09:24:15 compute-0 podman[177795]: 2026-01-29 09:24:15.436506756 +0000 UTC m=+0.481860813 container remove 9253ec0a2af6d4c276a55aa0e124e1d10341257e9642dae54c461887f27dee35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:24:15 compute-0 systemd[1]: libpod-conmon-9253ec0a2af6d4c276a55aa0e124e1d10341257e9642dae54c461887f27dee35.scope: Deactivated successfully.
Jan 29 09:24:15 compute-0 sudo[177707]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:15 compute-0 sudo[177833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:24:15 compute-0 sudo[177833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:24:15 compute-0 sudo[177833]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:15 compute-0 sudo[177858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:24:15 compute-0 sudo[177858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:24:15 compute-0 podman[177894]: 2026-01-29 09:24:15.808002749 +0000 UTC m=+0.043134003 container create 5cb372982aac30497d5f15d3e4b6b54cd6464501eb9171ce053a8f40b829dcdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:24:15 compute-0 systemd[1]: Started libpod-conmon-5cb372982aac30497d5f15d3e4b6b54cd6464501eb9171ce053a8f40b829dcdb.scope.
Jan 29 09:24:15 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:24:15 compute-0 podman[177894]: 2026-01-29 09:24:15.878715839 +0000 UTC m=+0.113847113 container init 5cb372982aac30497d5f15d3e4b6b54cd6464501eb9171ce053a8f40b829dcdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:24:15 compute-0 podman[177894]: 2026-01-29 09:24:15.883252153 +0000 UTC m=+0.118383417 container start 5cb372982aac30497d5f15d3e4b6b54cd6464501eb9171ce053a8f40b829dcdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendel, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 09:24:15 compute-0 podman[177894]: 2026-01-29 09:24:15.787714238 +0000 UTC m=+0.022845512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:24:15 compute-0 podman[177894]: 2026-01-29 09:24:15.886711967 +0000 UTC m=+0.121843251 container attach 5cb372982aac30497d5f15d3e4b6b54cd6464501eb9171ce053a8f40b829dcdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:24:15 compute-0 inspiring_mendel[177910]: 167 167
Jan 29 09:24:15 compute-0 systemd[1]: libpod-5cb372982aac30497d5f15d3e4b6b54cd6464501eb9171ce053a8f40b829dcdb.scope: Deactivated successfully.
Jan 29 09:24:15 compute-0 podman[177894]: 2026-01-29 09:24:15.888464874 +0000 UTC m=+0.123596128 container died 5cb372982aac30497d5f15d3e4b6b54cd6464501eb9171ce053a8f40b829dcdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 09:24:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e215516ba472c43295ce698186429b53c16d7c66d2fc8dbeb4ec573a63b42966-merged.mount: Deactivated successfully.
Jan 29 09:24:15 compute-0 podman[177894]: 2026-01-29 09:24:15.929060767 +0000 UTC m=+0.164192021 container remove 5cb372982aac30497d5f15d3e4b6b54cd6464501eb9171ce053a8f40b829dcdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 29 09:24:15 compute-0 systemd[1]: libpod-conmon-5cb372982aac30497d5f15d3e4b6b54cd6464501eb9171ce053a8f40b829dcdb.scope: Deactivated successfully.
Jan 29 09:24:16 compute-0 podman[177934]: 2026-01-29 09:24:16.058375931 +0000 UTC m=+0.041170730 container create 61b7b2fa3c9c4761540048a7089939055b207f9a3113c5b71433eb70af2118f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:24:16 compute-0 systemd[1]: Started libpod-conmon-61b7b2fa3c9c4761540048a7089939055b207f9a3113c5b71433eb70af2118f0.scope.
Jan 29 09:24:16 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c373591d1cb7f26eacbf1ba04fdbed848d6dd0a379ef9c21d176c00bbaee06a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c373591d1cb7f26eacbf1ba04fdbed848d6dd0a379ef9c21d176c00bbaee06a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c373591d1cb7f26eacbf1ba04fdbed848d6dd0a379ef9c21d176c00bbaee06a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c373591d1cb7f26eacbf1ba04fdbed848d6dd0a379ef9c21d176c00bbaee06a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:24:16 compute-0 podman[177934]: 2026-01-29 09:24:16.037703379 +0000 UTC m=+0.020498228 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:24:16 compute-0 podman[177934]: 2026-01-29 09:24:16.14963003 +0000 UTC m=+0.132424849 container init 61b7b2fa3c9c4761540048a7089939055b207f9a3113c5b71433eb70af2118f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_gould, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 29 09:24:16 compute-0 podman[177934]: 2026-01-29 09:24:16.165646395 +0000 UTC m=+0.148441224 container start 61b7b2fa3c9c4761540048a7089939055b207f9a3113c5b71433eb70af2118f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_gould, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:24:16 compute-0 podman[177934]: 2026-01-29 09:24:16.169736796 +0000 UTC m=+0.152531615 container attach 61b7b2fa3c9c4761540048a7089939055b207f9a3113c5b71433eb70af2118f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:24:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v407: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:16 compute-0 ceph-mon[75183]: pgmap v407: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:16 compute-0 lvm[178029]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:24:16 compute-0 lvm[178030]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:24:16 compute-0 lvm[178029]: VG ceph_vg0 finished
Jan 29 09:24:16 compute-0 lvm[178030]: VG ceph_vg1 finished
Jan 29 09:24:16 compute-0 lvm[178032]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:24:16 compute-0 lvm[178032]: VG ceph_vg2 finished
Jan 29 09:24:16 compute-0 cool_gould[177951]: {}
Jan 29 09:24:16 compute-0 systemd[1]: libpod-61b7b2fa3c9c4761540048a7089939055b207f9a3113c5b71433eb70af2118f0.scope: Deactivated successfully.
Jan 29 09:24:16 compute-0 podman[177934]: 2026-01-29 09:24:16.895953927 +0000 UTC m=+0.878748746 container died 61b7b2fa3c9c4761540048a7089939055b207f9a3113c5b71433eb70af2118f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_gould, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 09:24:16 compute-0 systemd[1]: libpod-61b7b2fa3c9c4761540048a7089939055b207f9a3113c5b71433eb70af2118f0.scope: Consumed 1.013s CPU time.
Jan 29 09:24:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c373591d1cb7f26eacbf1ba04fdbed848d6dd0a379ef9c21d176c00bbaee06a9-merged.mount: Deactivated successfully.
Jan 29 09:24:16 compute-0 podman[177934]: 2026-01-29 09:24:16.952669188 +0000 UTC m=+0.935463987 container remove 61b7b2fa3c9c4761540048a7089939055b207f9a3113c5b71433eb70af2118f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_gould, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Jan 29 09:24:16 compute-0 systemd[1]: libpod-conmon-61b7b2fa3c9c4761540048a7089939055b207f9a3113c5b71433eb70af2118f0.scope: Deactivated successfully.
Jan 29 09:24:17 compute-0 sudo[177858]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:24:17 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:24:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:24:17 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:24:17 compute-0 sudo[178047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:24:17 compute-0 sudo[178047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:24:17 compute-0 sudo[178047]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:18 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:24:18 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:24:18 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 29 09:24:18 compute-0 sshd[999]: Received signal 15; terminating.
Jan 29 09:24:18 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 29 09:24:18 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 29 09:24:18 compute-0 systemd[1]: sshd.service: Consumed 1.955s CPU time, read 32.0K from disk, written 0B to disk.
Jan 29 09:24:18 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 29 09:24:18 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 29 09:24:18 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 29 09:24:18 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 29 09:24:18 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 29 09:24:18 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 29 09:24:18 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 29 09:24:18 compute-0 sshd[178678]: Server listening on 0.0.0.0 port 22.
Jan 29 09:24:18 compute-0 sshd[178678]: Server listening on :: port 22.
Jan 29 09:24:18 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 29 09:24:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v408: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:19 compute-0 ceph-mon[75183]: pgmap v408: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:24:19 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 09:24:19 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 29 09:24:19 compute-0 systemd[1]: Reloading.
Jan 29 09:24:19 compute-0 systemd-rc-local-generator[178934]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:24:19 compute-0 systemd-sysv-generator[178938]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:24:19 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 09:24:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v409: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v410: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:22 compute-0 ceph-mon[75183]: pgmap v409: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:23 compute-0 sudo[159321]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:23 compute-0 ceph-mon[75183]: pgmap v410: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:23 compute-0 sudo[185670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibhiwhjigjpjmiocsnrfdeppbljhkrcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678663.354801-331-99711564173736/AnsiballZ_systemd.py'
Jan 29 09:24:23 compute-0 sudo[185670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:24:24 compute-0 python3.9[185698]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 09:24:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v411: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:24 compute-0 systemd[1]: Reloading.
Jan 29 09:24:24 compute-0 systemd-sysv-generator[186376]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:24:24 compute-0 systemd-rc-local-generator[186373]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:24:24 compute-0 ceph-mon[75183]: pgmap v411: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:24 compute-0 sudo[185670]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:24 compute-0 sudo[187143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nidawaluolffrouybduxnbhhboxrdnmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678664.691654-331-269706902801345/AnsiballZ_systemd.py'
Jan 29 09:24:24 compute-0 sudo[187143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:25 compute-0 python3.9[187162]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 09:24:25 compute-0 systemd[1]: Reloading.
Jan 29 09:24:25 compute-0 systemd-sysv-generator[187621]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:24:25 compute-0 systemd-rc-local-generator[187617]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:24:25 compute-0 sudo[187143]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 09:24:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 29 09:24:25 compute-0 systemd[1]: man-db-cache-update.service: Consumed 7.582s CPU time.
Jan 29 09:24:25 compute-0 systemd[1]: run-r6ff794f06d7947099efe9bdd40be1d71.service: Deactivated successfully.
Jan 29 09:24:26 compute-0 sudo[187853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyhciqzcyasakxqswmytwrouxzijvzlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678665.7611961-331-95158229292037/AnsiballZ_systemd.py'
Jan 29 09:24:26 compute-0 sudo[187853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v412: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:26 compute-0 python3.9[187855]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 09:24:26 compute-0 ceph-mon[75183]: pgmap v412: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:26 compute-0 systemd[1]: Reloading.
Jan 29 09:24:26 compute-0 systemd-sysv-generator[187882]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:24:26 compute-0 systemd-rc-local-generator[187878]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:24:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:24:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:24:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:24:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:24:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:24:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:24:26 compute-0 sudo[187853]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:27 compute-0 sudo[188043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpxhpvbfqyjryuacwgosfzyrluksdfps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678666.7345-331-75416861127397/AnsiballZ_systemd.py'
Jan 29 09:24:27 compute-0 sudo[188043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:27 compute-0 python3.9[188045]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 09:24:27 compute-0 systemd[1]: Reloading.
Jan 29 09:24:27 compute-0 systemd-rc-local-generator[188075]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:24:27 compute-0 systemd-sysv-generator[188078]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:24:27 compute-0 sudo[188043]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:28 compute-0 sudo[188233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eanjaldvgsqpbiakznbkjmwrjrxkcnzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678667.8317091-360-273282251371540/AnsiballZ_systemd.py'
Jan 29 09:24:28 compute-0 sudo[188233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v413: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:28 compute-0 ceph-mon[75183]: pgmap v413: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:28 compute-0 python3.9[188235]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:28 compute-0 systemd[1]: Reloading.
Jan 29 09:24:28 compute-0 systemd-sysv-generator[188266]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:24:28 compute-0 systemd-rc-local-generator[188263]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:24:28 compute-0 sudo[188233]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:24:29 compute-0 sudo[188423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agtxjpxsfntbvozzpsaahofxxeajtegv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678668.9064145-360-75349054270859/AnsiballZ_systemd.py'
Jan 29 09:24:29 compute-0 sudo[188423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:29 compute-0 python3.9[188425]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:29 compute-0 systemd[1]: Reloading.
Jan 29 09:24:29 compute-0 systemd-sysv-generator[188460]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:24:29 compute-0 systemd-rc-local-generator[188456]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:24:29 compute-0 sudo[188423]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v414: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:30 compute-0 sudo[188613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkgflqkqvjccgtiaflmdhmbiluwfgcox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678670.0485191-360-104421200347533/AnsiballZ_systemd.py'
Jan 29 09:24:30 compute-0 sudo[188613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:30 compute-0 ceph-mon[75183]: pgmap v414: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:30 compute-0 python3.9[188615]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:30 compute-0 systemd[1]: Reloading.
Jan 29 09:24:30 compute-0 systemd-sysv-generator[188647]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:24:30 compute-0 systemd-rc-local-generator[188643]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:24:31 compute-0 sudo[188613]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:31 compute-0 sudo[188803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeurtwyavgtkawxyftitepqyjytmibus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678671.1711771-360-132959356243372/AnsiballZ_systemd.py'
Jan 29 09:24:31 compute-0 sudo[188803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:31 compute-0 python3.9[188805]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:31 compute-0 sudo[188803]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:32 compute-0 sudo[188958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gguxewogwanryaeohsfoniqkdkrowpfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678671.89559-360-261092197241159/AnsiballZ_systemd.py'
Jan 29 09:24:32 compute-0 sudo[188958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v415: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:32 compute-0 ceph-mon[75183]: pgmap v415: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:32 compute-0 python3.9[188960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:32 compute-0 systemd[1]: Reloading.
Jan 29 09:24:32 compute-0 systemd-sysv-generator[188993]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:24:32 compute-0 systemd-rc-local-generator[188990]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:24:32 compute-0 sudo[188958]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:33 compute-0 sudo[189148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ombezedhdbavpcxkrjsxpaxxcrkqzzee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678672.9863389-396-231337603501556/AnsiballZ_systemd.py'
Jan 29 09:24:33 compute-0 sudo[189148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:33 compute-0 python3.9[189150]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 09:24:33 compute-0 systemd[1]: Reloading.
Jan 29 09:24:33 compute-0 systemd-sysv-generator[189183]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:24:33 compute-0 systemd-rc-local-generator[189178]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:24:33 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 29 09:24:33 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 29 09:24:34 compute-0 sudo[189148]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:24:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v416: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:34 compute-0 ceph-mon[75183]: pgmap v416: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:34 compute-0 sudo[189341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvdcersgknunfnhnwqqglyusvnclxwor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678674.1601114-404-617858399567/AnsiballZ_systemd.py'
Jan 29 09:24:34 compute-0 sudo[189341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:34 compute-0 python3.9[189343]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:34 compute-0 sudo[189341]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:35 compute-0 sudo[189496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imavhmmdhbcraniigstpejmvypjuziwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678674.8653278-404-176534812638643/AnsiballZ_systemd.py'
Jan 29 09:24:35 compute-0 sudo[189496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:35 compute-0 python3.9[189498]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:35 compute-0 sudo[189496]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:35 compute-0 sudo[189651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxmentvrrpzdrdndtvvquduwdasfmypc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678675.6065536-404-244751016863250/AnsiballZ_systemd.py'
Jan 29 09:24:35 compute-0 sudo[189651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:36 compute-0 python3.9[189653]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v417: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:36 compute-0 sudo[189651]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:36 compute-0 podman[189655]: 2026-01-29 09:24:36.301832143 +0000 UTC m=+0.101765046 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 29 09:24:36 compute-0 ceph-mon[75183]: pgmap v417: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:36 compute-0 sudo[189831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noiqpoajijsnnjwhtuwhhbqjgjjpvbty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678676.3627083-404-87215196500377/AnsiballZ_systemd.py'
Jan 29 09:24:36 compute-0 sudo[189831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:36 compute-0 python3.9[189833]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:37 compute-0 sudo[189831]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:37 compute-0 sudo[189986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfujqbrnaqeodmkstyyllaxmhybwjmcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678677.1271698-404-102007193326371/AnsiballZ_systemd.py'
Jan 29 09:24:37 compute-0 sudo[189986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:37 compute-0 python3.9[189988]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:37 compute-0 sudo[189986]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:38 compute-0 sudo[190141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alazhhmbttylfnmmejofllunrbhyoyrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678677.9421399-404-245304267176313/AnsiballZ_systemd.py'
Jan 29 09:24:38 compute-0 sudo[190141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v418: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:38 compute-0 ceph-mon[75183]: pgmap v418: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:38 compute-0 python3.9[190143]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:38 compute-0 sudo[190141]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:38 compute-0 sudo[190296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prgqraujksvyjswwybsaejkmvzqhtzbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678678.6795125-404-23228089284611/AnsiballZ_systemd.py'
Jan 29 09:24:38 compute-0 sudo[190296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:24:39 compute-0 python3.9[190298]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:39 compute-0 sudo[190296]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:39 compute-0 sudo[190451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcgvjypmogybapzcqhasatvacazafxji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678679.435694-404-71231529780676/AnsiballZ_systemd.py'
Jan 29 09:24:39 compute-0 sudo[190451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:40 compute-0 python3.9[190453]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:40 compute-0 sudo[190451]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:40 compute-0 podman[190455]: 2026-01-29 09:24:40.117372148 +0000 UTC m=+0.057749330 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 09:24:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v419: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:40 compute-0 ceph-mon[75183]: pgmap v419: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:40 compute-0 sudo[190626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmhfiialowndaukegmermguxbhcbsuns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678680.2028992-404-162329629724691/AnsiballZ_systemd.py'
Jan 29 09:24:40 compute-0 sudo[190626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:40 compute-0 python3.9[190628]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:40 compute-0 sudo[190626]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:41 compute-0 sudo[190781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwfkrtjlhsxabpxzuhsyldxvimgyxqal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678680.9550717-404-154085763317106/AnsiballZ_systemd.py'
Jan 29 09:24:41 compute-0 sudo[190781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:41 compute-0 python3.9[190783]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:41 compute-0 sudo[190781]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:41 compute-0 sudo[190936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isrwyorvhnsnwpnkyuslgegaqrtfjwpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678681.7204497-404-253315032016857/AnsiballZ_systemd.py'
Jan 29 09:24:41 compute-0 sudo[190936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v420: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:42 compute-0 python3.9[190938]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:42 compute-0 sudo[190936]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:42 compute-0 ceph-mon[75183]: pgmap v420: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:42 compute-0 sudo[191091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qymsalfrlgjxmvljjvhqajnezygqyxjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678682.5063388-404-127841470817818/AnsiballZ_systemd.py'
Jan 29 09:24:42 compute-0 sudo[191091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:43 compute-0 python3.9[191093]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:43 compute-0 sudo[191091]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:43 compute-0 sudo[191246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jazuzxkpbhxqjhzpusurjeekhialavaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678683.240185-404-32175713495472/AnsiballZ_systemd.py'
Jan 29 09:24:43 compute-0 sudo[191246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:43 compute-0 python3.9[191248]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:43 compute-0 sudo[191246]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:24:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v421: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:44 compute-0 sudo[191401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erimjbowolokjqdglrpqsweptzsloioy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678683.9942517-404-120247813622154/AnsiballZ_systemd.py'
Jan 29 09:24:44 compute-0 sudo[191401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:44 compute-0 ceph-mon[75183]: pgmap v421: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:44 compute-0 python3.9[191403]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 09:24:44 compute-0 sudo[191401]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:45 compute-0 sudo[191556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqufnswlwcfbgrdaexnzsfghdaobesve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678684.9657023-506-76802110829739/AnsiballZ_file.py'
Jan 29 09:24:45 compute-0 sudo[191556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:45 compute-0 python3.9[191558]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:24:45 compute-0 sudo[191556]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:45 compute-0 sudo[191708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfpqlemjrznxqfheavqtopezknetpflf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678685.5614254-506-135419388185262/AnsiballZ_file.py'
Jan 29 09:24:45 compute-0 sudo[191708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:46 compute-0 python3.9[191710]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:24:46 compute-0 sudo[191708]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v422: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:46 compute-0 ceph-mon[75183]: pgmap v422: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:46 compute-0 sudo[191860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aluqpwjokwmqfrdocerejharaebcimer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678686.239344-506-101235893299966/AnsiballZ_file.py'
Jan 29 09:24:46 compute-0 sudo[191860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:46 compute-0 python3.9[191862]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:24:46 compute-0 sudo[191860]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:47 compute-0 sudo[192012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oatcmdlriqvuoqyqjzpabuuxosqwzjkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678686.8218515-506-51365953184405/AnsiballZ_file.py'
Jan 29 09:24:47 compute-0 sudo[192012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:47 compute-0 python3.9[192014]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:24:47 compute-0 sudo[192012]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:47 compute-0 sudo[192164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-junhtslwxcwqzjkuwlxipynvillhqpyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678687.3745887-506-76262714215925/AnsiballZ_file.py'
Jan 29 09:24:47 compute-0 sudo[192164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:47 compute-0 python3.9[192166]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:24:47 compute-0 sudo[192164]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:48 compute-0 sudo[192316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dctdnamsodzpoknfalzzqlmdmwhsqdri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678687.944355-506-194724159207033/AnsiballZ_file.py'
Jan 29 09:24:48 compute-0 sudo[192316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v423: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:48 compute-0 ceph-mon[75183]: pgmap v423: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:48 compute-0 python3.9[192318]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:24:48 compute-0 sudo[192316]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:49 compute-0 python3.9[192468]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:24:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:24:49 compute-0 sudo[192618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqmzhmocjndhboapquxvtdryxexgoosf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678689.3152647-557-101860152463416/AnsiballZ_stat.py'
Jan 29 09:24:49 compute-0 sudo[192618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:50 compute-0 python3.9[192620]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:24:50 compute-0 sudo[192618]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v424: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:50 compute-0 ceph-mon[75183]: pgmap v424: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:50 compute-0 sudo[192743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhujhqwhsmaycrxxcqksviaqixcxpbqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678689.3152647-557-101860152463416/AnsiballZ_copy.py'
Jan 29 09:24:50 compute-0 sudo[192743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:50 compute-0 python3.9[192745]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769678689.3152647-557-101860152463416/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:24:50 compute-0 sudo[192743]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:51 compute-0 sudo[192895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyimxvyvtzxbafeujaqothfoarjlbeui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678690.9518604-557-162096550878583/AnsiballZ_stat.py'
Jan 29 09:24:51 compute-0 sudo[192895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:51 compute-0 python3.9[192897]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:24:51 compute-0 sudo[192895]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:51 compute-0 sudo[193020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luokzqzolofgonpndlugvuoyxscpxyls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678690.9518604-557-162096550878583/AnsiballZ_copy.py'
Jan 29 09:24:51 compute-0 sudo[193020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:51 compute-0 python3.9[193022]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769678690.9518604-557-162096550878583/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:24:52 compute-0 sudo[193020]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v425: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:52 compute-0 ceph-mon[75183]: pgmap v425: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:52 compute-0 sudo[193172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzkcpvwzdozpzgwyoahmpgwxdrmywbsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678692.1106272-557-215298578744980/AnsiballZ_stat.py'
Jan 29 09:24:52 compute-0 sudo[193172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:52 compute-0 python3.9[193174]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:24:52 compute-0 sudo[193172]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:52 compute-0 sudo[193297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kckbcvrlepzekqgqrdudhqsyyhpamilc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678692.1106272-557-215298578744980/AnsiballZ_copy.py'
Jan 29 09:24:52 compute-0 sudo[193297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:53 compute-0 python3.9[193299]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769678692.1106272-557-215298578744980/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:24:53 compute-0 sudo[193297]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:53 compute-0 sudo[193449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cufosogadjmrzpozhnzrgvrjrtghxfig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678693.185064-557-156724621061260/AnsiballZ_stat.py'
Jan 29 09:24:53 compute-0 sudo[193449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:53 compute-0 python3.9[193451]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:24:53 compute-0 sudo[193449]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:54 compute-0 sudo[193574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrlhmvrnaslgepfluwbgblqtuunadnpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678693.185064-557-156724621061260/AnsiballZ_copy.py'
Jan 29 09:24:54 compute-0 sudo[193574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:24:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v426: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:54 compute-0 python3.9[193576]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769678693.185064-557-156724621061260/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:24:54 compute-0 sudo[193574]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:54 compute-0 ceph-mon[75183]: pgmap v426: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:54 compute-0 sudo[193726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjksrcmxnnkgfgrxukcpmroqiimjxnck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678694.42456-557-234709934579487/AnsiballZ_stat.py'
Jan 29 09:24:54 compute-0 sudo[193726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:54 compute-0 python3.9[193728]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:24:54 compute-0 sudo[193726]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:55 compute-0 sudo[193851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypsbnidhbtcjsaeepkkwdlevtuxanbuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678694.42456-557-234709934579487/AnsiballZ_copy.py'
Jan 29 09:24:55 compute-0 sudo[193851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:55 compute-0 python3.9[193853]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769678694.42456-557-234709934579487/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:24:55 compute-0 sudo[193851]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:55 compute-0 sudo[194003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cofojqzvcmxmfwnwckuoxxwoijbjkgco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678695.5349805-557-277123955856419/AnsiballZ_stat.py'
Jan 29 09:24:55 compute-0 sudo[194003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:55 compute-0 python3.9[194005]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:24:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:24:55
Jan 29 09:24:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:24:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:24:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['images', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'volumes', '.mgr']
Jan 29 09:24:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:24:56 compute-0 sudo[194003]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v427: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:56 compute-0 sudo[194128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrcaamjcjerayzdroawmettxyufypsvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678695.5349805-557-277123955856419/AnsiballZ_copy.py'
Jan 29 09:24:56 compute-0 sudo[194128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:56 compute-0 ceph-mon[75183]: pgmap v427: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:56 compute-0 python3.9[194130]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769678695.5349805-557-277123955856419/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:24:56 compute-0 sudo[194128]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:24:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:24:56 compute-0 sudo[194280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icvtntzsjiiosthdysdirextbdimwvck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678696.553768-557-188839711386660/AnsiballZ_stat.py'
Jan 29 09:24:56 compute-0 sudo[194280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:56 compute-0 python3.9[194282]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:24:56 compute-0 sudo[194280]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:57 compute-0 sudo[194403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjzvwurkrzeoulsggaquvvdfvsgcficx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678696.553768-557-188839711386660/AnsiballZ_copy.py'
Jan 29 09:24:57 compute-0 sudo[194403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:57 compute-0 python3.9[194405]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769678696.553768-557-188839711386660/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:24:57 compute-0 sudo[194403]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:57 compute-0 sudo[194555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxgnpkadshosammspcfaawyvrqggjgpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678697.5862036-557-151345835442235/AnsiballZ_stat.py'
Jan 29 09:24:57 compute-0 sudo[194555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:58 compute-0 python3.9[194557]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:24:58 compute-0 sudo[194555]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v428: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:58 compute-0 sudo[194680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymtfopvrlpulucxphjsiaqfqfehpaitm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678697.5862036-557-151345835442235/AnsiballZ_copy.py'
Jan 29 09:24:58 compute-0 sudo[194680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:58 compute-0 ceph-mon[75183]: pgmap v428: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:24:58 compute-0 python3.9[194682]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769678697.5862036-557-151345835442235/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:24:58 compute-0 sudo[194680]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:58 compute-0 sudo[194832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueagevxxaobprsgsynrjrqcgkczgqalc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678698.6425645-670-2261784451774/AnsiballZ_command.py'
Jan 29 09:24:58 compute-0 sudo[194832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:24:59 compute-0 python3.9[194834]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 29 09:24:59 compute-0 sudo[194832]: pam_unix(sudo:session): session closed for user root
Jan 29 09:24:59 compute-0 sudo[194985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rivepvzvpxonfpoorniiwjslorlcfiwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678699.4100008-679-109440040130815/AnsiballZ_file.py'
Jan 29 09:24:59 compute-0 sudo[194985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:24:59 compute-0 python3.9[194987]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:24:59 compute-0 sudo[194985]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:00 compute-0 sudo[195137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxnragtgzlibcfxnwijcwizgveavqktl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678699.9997046-679-140979081185448/AnsiballZ_file.py'
Jan 29 09:25:00 compute-0 sudo[195137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v429: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:00 compute-0 ceph-mon[75183]: pgmap v429: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:00 compute-0 python3.9[195139]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:00 compute-0 sudo[195137]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:00 compute-0 sudo[195289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loecmincoxuxtowtzpwwvoaklwcftxss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678700.5785444-679-116709001580630/AnsiballZ_file.py'
Jan 29 09:25:00 compute-0 sudo[195289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:01 compute-0 python3.9[195291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:01 compute-0 sudo[195289]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:01 compute-0 sudo[195441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alqphqxpnusjjqahcbacfihjtpcuungm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678701.183079-679-163161369529436/AnsiballZ_file.py'
Jan 29 09:25:01 compute-0 sudo[195441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:25:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:25:01 compute-0 python3.9[195443]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:01 compute-0 sudo[195441]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:01 compute-0 sudo[195593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhwjcdwgbbumopzgcfmczfypwdcxlnto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678701.721175-679-91809159918462/AnsiballZ_file.py'
Jan 29 09:25:01 compute-0 sudo[195593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:02 compute-0 python3.9[195595]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:02 compute-0 sudo[195593]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v430: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:02 compute-0 ceph-mon[75183]: pgmap v430: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:02 compute-0 sudo[195745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsyyxpbhxskzjrlobebfsjjbeudthvtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678702.282522-679-159994875678012/AnsiballZ_file.py'
Jan 29 09:25:02 compute-0 sudo[195745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:02 compute-0 python3.9[195747]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:02 compute-0 sudo[195745]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:03 compute-0 sudo[195897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmwwewoxiyfdpalzvtgadwkruhrhumna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678702.834425-679-79813186889359/AnsiballZ_file.py'
Jan 29 09:25:03 compute-0 sudo[195897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:03 compute-0 python3.9[195899]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:03 compute-0 sudo[195897]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:03 compute-0 sudo[196049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfgsutzgszwsfkyiygslzqcztbmbelvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678703.4379115-679-217102068445593/AnsiballZ_file.py'
Jan 29 09:25:03 compute-0 sudo[196049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:03 compute-0 python3.9[196051]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:03 compute-0 sudo[196049]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:25:04 compute-0 sudo[196201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuhmdwnfjzaxujkokusrneidwwuwohfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678703.9894032-679-184827865454791/AnsiballZ_file.py'
Jan 29 09:25:04 compute-0 sudo[196201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v431: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:04 compute-0 ceph-mon[75183]: pgmap v431: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:04 compute-0 python3.9[196203]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:04 compute-0 sudo[196201]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:04 compute-0 sudo[196353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shaijjzwepsamixvdtpajeauzxfaarfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678704.5549972-679-184247878322217/AnsiballZ_file.py'
Jan 29 09:25:04 compute-0 sudo[196353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:04 compute-0 python3.9[196355]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:04 compute-0 sudo[196353]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:05 compute-0 sudo[196505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrgdcmwmdyyayuxaswqxwbyccqcbqmxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678705.0741959-679-121056883076782/AnsiballZ_file.py'
Jan 29 09:25:05 compute-0 sudo[196505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:05 compute-0 python3.9[196507]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:05 compute-0 sudo[196505]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:05 compute-0 sudo[196657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rswdbyiptemiiajndzgsglryouaiilcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678705.6081192-679-159308358088935/AnsiballZ_file.py'
Jan 29 09:25:05 compute-0 sudo[196657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:06 compute-0 python3.9[196659]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:06 compute-0 sudo[196657]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v432: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:06 compute-0 ceph-mon[75183]: pgmap v432: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:06 compute-0 sudo[196822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpnjzftizqwqtbdjkgykccsqsyfqfabf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678706.192571-679-32748312480596/AnsiballZ_file.py'
Jan 29 09:25:06 compute-0 sudo[196822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:06 compute-0 podman[196783]: 2026-01-29 09:25:06.496260147 +0000 UTC m=+0.086258650 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 29 09:25:06 compute-0 python3.9[196829]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:06 compute-0 sudo[196822]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:07 compute-0 sudo[196987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgfqipqngbgvasdruttmaoinzvyxlqnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678706.7820852-679-26355071765877/AnsiballZ_file.py'
Jan 29 09:25:07 compute-0 sudo[196987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:07 compute-0 python3.9[196989]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:07 compute-0 sudo[196987]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:07 compute-0 sudo[197139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sphatjdtimewfxyldvflphrzpgvlqwnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678707.374537-778-200172306697681/AnsiballZ_stat.py'
Jan 29 09:25:07 compute-0 sudo[197139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:07 compute-0 python3.9[197141]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:07 compute-0 sudo[197139]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:08 compute-0 sudo[197262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huullgclsuonbxrrysemszehxvyogvhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678707.374537-778-200172306697681/AnsiballZ_copy.py'
Jan 29 09:25:08 compute-0 sudo[197262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v433: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:08 compute-0 python3.9[197264]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678707.374537-778-200172306697681/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:08 compute-0 sudo[197262]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:08 compute-0 ceph-mon[75183]: pgmap v433: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:08 compute-0 sudo[197414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drygcxxqfvsrjypvlafaqafknkhflogg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678708.4202669-778-136684769563708/AnsiballZ_stat.py'
Jan 29 09:25:08 compute-0 sudo[197414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:08 compute-0 python3.9[197416]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:08 compute-0 sudo[197414]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:25:09.024 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:25:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:25:09.024 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:25:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:25:09.024 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:25:09 compute-0 sudo[197537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoetkdnuivoajktzncigwopidcaprdnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678708.4202669-778-136684769563708/AnsiballZ_copy.py'
Jan 29 09:25:09 compute-0 sudo[197537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:25:09 compute-0 python3.9[197539]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678708.4202669-778-136684769563708/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:09 compute-0 sudo[197537]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:09 compute-0 sudo[197689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bofagungpkggwtvzhlnpcywquesrdxdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678709.467332-778-157035538374482/AnsiballZ_stat.py'
Jan 29 09:25:09 compute-0 sudo[197689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:09 compute-0 python3.9[197691]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:09 compute-0 sudo[197689]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:10 compute-0 sudo[197812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebwlszjncygfwuxncejoticskfbhetpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678709.467332-778-157035538374482/AnsiballZ_copy.py'
Jan 29 09:25:10 compute-0 sudo[197812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:10 compute-0 podman[197814]: 2026-01-29 09:25:10.230982284 +0000 UTC m=+0.070860117 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 29 09:25:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v434: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:10 compute-0 ceph-mon[75183]: pgmap v434: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:10 compute-0 python3.9[197815]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678709.467332-778-157035538374482/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:10 compute-0 sudo[197812]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:10 compute-0 sudo[197983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxdjzhzwmbgnqultzomgginitezdxdeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678710.4986076-778-91967351511407/AnsiballZ_stat.py'
Jan 29 09:25:10 compute-0 sudo[197983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:10 compute-0 python3.9[197985]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:10 compute-0 sudo[197983]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:11 compute-0 sudo[198106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngcrhejtgnanzxpgqzwqwlhcqdizfvqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678710.4986076-778-91967351511407/AnsiballZ_copy.py'
Jan 29 09:25:11 compute-0 sudo[198106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:11 compute-0 python3.9[198108]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678710.4986076-778-91967351511407/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:11 compute-0 sudo[198106]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:11 compute-0 sudo[198258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jonjpvczagimdfpwooyvnzmxyslzljiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678711.6263204-778-123642497590315/AnsiballZ_stat.py'
Jan 29 09:25:11 compute-0 sudo[198258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:12 compute-0 python3.9[198260]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:12 compute-0 sudo[198258]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v435: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:12 compute-0 ceph-mon[75183]: pgmap v435: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:12 compute-0 sudo[198381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oopavsvnhrwhlzmqzybpkruevjhtkrsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678711.6263204-778-123642497590315/AnsiballZ_copy.py'
Jan 29 09:25:12 compute-0 sudo[198381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:12 compute-0 python3.9[198383]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678711.6263204-778-123642497590315/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:12 compute-0 sudo[198381]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:12 compute-0 sudo[198533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhysomrpkafhrqjbxlsdkefmulolmfok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678712.6399713-778-169539111229501/AnsiballZ_stat.py'
Jan 29 09:25:12 compute-0 sudo[198533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:13 compute-0 python3.9[198535]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:13 compute-0 sudo[198533]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:13 compute-0 sudo[198656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqtofaswovtkhyxpokeqqwkbsciwgfrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678712.6399713-778-169539111229501/AnsiballZ_copy.py'
Jan 29 09:25:13 compute-0 sudo[198656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:13 compute-0 python3.9[198658]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678712.6399713-778-169539111229501/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:13 compute-0 sudo[198656]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:13 compute-0 sudo[198808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssxdkeaneeuhjfbzbbuhjalvblwauldp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678713.6871154-778-181874392466048/AnsiballZ_stat.py'
Jan 29 09:25:13 compute-0 sudo[198808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:14 compute-0 python3.9[198810]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:14 compute-0 sudo[198808]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:25:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v436: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:14 compute-0 ceph-mon[75183]: pgmap v436: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:14 compute-0 sudo[198931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uecxcjcfxewguuvykslnfnuurhmxdzwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678713.6871154-778-181874392466048/AnsiballZ_copy.py'
Jan 29 09:25:14 compute-0 sudo[198931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:14 compute-0 python3.9[198933]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678713.6871154-778-181874392466048/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:14 compute-0 sudo[198931]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:14 compute-0 sudo[199083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvqtbnkyjzvsdmyzatzgbgtrgjxkxfmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678714.6993213-778-160381538692836/AnsiballZ_stat.py'
Jan 29 09:25:14 compute-0 sudo[199083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:15 compute-0 python3.9[199085]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:15 compute-0 sudo[199083]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:15 compute-0 sudo[199206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-semelzsrcqtabobpirmzqtmfbkvfloip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678714.6993213-778-160381538692836/AnsiballZ_copy.py'
Jan 29 09:25:15 compute-0 sudo[199206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:15 compute-0 python3.9[199208]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678714.6993213-778-160381538692836/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:15 compute-0 sudo[199206]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:16 compute-0 sudo[199358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vilxjdvahpfvrlzjywsoanbvkrbiwdtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678715.7802322-778-201977432428347/AnsiballZ_stat.py'
Jan 29 09:25:16 compute-0 sudo[199358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:16 compute-0 python3.9[199360]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:16 compute-0 sudo[199358]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v437: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:16 compute-0 ceph-mon[75183]: pgmap v437: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:16 compute-0 sudo[199481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bywkmjlfixdyfwztxmwjxrgvqtvrbkig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678715.7802322-778-201977432428347/AnsiballZ_copy.py'
Jan 29 09:25:16 compute-0 sudo[199481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:16 compute-0 python3.9[199483]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678715.7802322-778-201977432428347/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:16 compute-0 sudo[199481]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:17 compute-0 sudo[199537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:25:17 compute-0 sudo[199537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:25:17 compute-0 sudo[199537]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:17 compute-0 sudo[199585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 29 09:25:17 compute-0 sudo[199585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:25:17 compute-0 sudo[199683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqbloksmojvfhdlsrgdzlmpzdzfqyfkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678717.0316892-778-224234008572272/AnsiballZ_stat.py'
Jan 29 09:25:17 compute-0 sudo[199683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:17 compute-0 python3.9[199685]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:17 compute-0 sudo[199683]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:17 compute-0 podman[199730]: 2026-01-29 09:25:17.660680694 +0000 UTC m=+0.149050113 container exec 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 09:25:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v438: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:18 compute-0 podman[199730]: 2026-01-29 09:25:18.602457103 +0000 UTC m=+1.090826512 container exec_died 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 09:25:18 compute-0 sudo[199880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzyqckebjlckewzapmmstckuaibrnjqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678717.0316892-778-224234008572272/AnsiballZ_copy.py'
Jan 29 09:25:18 compute-0 sudo[199880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:18 compute-0 python3.9[199890]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678717.0316892-778-224234008572272/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:18 compute-0 sudo[199880]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:25:19 compute-0 sudo[199585]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:25:19 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:25:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:25:19 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:25:19 compute-0 sudo[200173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkjerbscrngjdjmhuzkflqevnfdgrgku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678719.0009174-778-29406531795534/AnsiballZ_stat.py'
Jan 29 09:25:19 compute-0 sudo[200173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:19 compute-0 sudo[200165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:25:19 compute-0 sudo[200165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:25:19 compute-0 sudo[200165]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:19 compute-0 sudo[200195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:25:19 compute-0 sudo[200195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:25:19 compute-0 python3.9[200192]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:19 compute-0 sudo[200173]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:19 compute-0 sudo[200371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwnlaphixktnczvcyjnmxkwutiyhnagj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678719.0009174-778-29406531795534/AnsiballZ_copy.py'
Jan 29 09:25:19 compute-0 sudo[200371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:19 compute-0 sudo[200195]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:25:19 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:25:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:25:19 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:25:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:25:19 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:25:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:25:19 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:25:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:25:19 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:25:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:25:19 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:25:19 compute-0 sudo[200374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:25:19 compute-0 sudo[200374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:25:19 compute-0 sudo[200374]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:19 compute-0 sudo[200399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:25:19 compute-0 sudo[200399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:25:19 compute-0 python3.9[200373]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678719.0009174-778-29406531795534/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:20 compute-0 sudo[200371]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:20 compute-0 podman[200483]: 2026-01-29 09:25:20.243720875 +0000 UTC m=+0.064723574 container create 0384e0c473815a93e3f670fd3703937553e83a9d9a66fc7464e21f182cc5963d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:25:20 compute-0 ceph-mon[75183]: pgmap v438: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:25:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:25:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:25:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:25:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:25:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:25:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:25:20 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:25:20 compute-0 systemd[1]: Started libpod-conmon-0384e0c473815a93e3f670fd3703937553e83a9d9a66fc7464e21f182cc5963d.scope.
Jan 29 09:25:20 compute-0 podman[200483]: 2026-01-29 09:25:20.201644685 +0000 UTC m=+0.022647394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:25:20 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:25:20 compute-0 podman[200483]: 2026-01-29 09:25:20.358141711 +0000 UTC m=+0.179144430 container init 0384e0c473815a93e3f670fd3703937553e83a9d9a66fc7464e21f182cc5963d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 09:25:20 compute-0 podman[200483]: 2026-01-29 09:25:20.364950069 +0000 UTC m=+0.185952768 container start 0384e0c473815a93e3f670fd3703937553e83a9d9a66fc7464e21f182cc5963d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:25:20 compute-0 magical_kalam[200551]: 167 167
Jan 29 09:25:20 compute-0 systemd[1]: libpod-0384e0c473815a93e3f670fd3703937553e83a9d9a66fc7464e21f182cc5963d.scope: Deactivated successfully.
Jan 29 09:25:20 compute-0 podman[200483]: 2026-01-29 09:25:20.428901154 +0000 UTC m=+0.249903973 container attach 0384e0c473815a93e3f670fd3703937553e83a9d9a66fc7464e21f182cc5963d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kalam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:25:20 compute-0 podman[200483]: 2026-01-29 09:25:20.429467229 +0000 UTC m=+0.250469928 container died 0384e0c473815a93e3f670fd3703937553e83a9d9a66fc7464e21f182cc5963d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 09:25:20 compute-0 sudo[200618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioabpamjosodzudtpavyfnbievrsousj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678720.1502364-778-55753176648996/AnsiballZ_stat.py'
Jan 29 09:25:20 compute-0 sudo[200618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cf646e20bf58bd57f2ab3fee42454db3e341790661a0258d853f9a9b33232a0-merged.mount: Deactivated successfully.
Jan 29 09:25:20 compute-0 podman[200483]: 2026-01-29 09:25:20.53034107 +0000 UTC m=+0.351343789 container remove 0384e0c473815a93e3f670fd3703937553e83a9d9a66fc7464e21f182cc5963d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kalam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 09:25:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v439: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:20 compute-0 systemd[1]: libpod-conmon-0384e0c473815a93e3f670fd3703937553e83a9d9a66fc7464e21f182cc5963d.scope: Deactivated successfully.
Jan 29 09:25:20 compute-0 python3.9[200620]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:20 compute-0 sudo[200618]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:20 compute-0 podman[200629]: 2026-01-29 09:25:20.702374144 +0000 UTC m=+0.038566971 container create 811c20291a34aeeabce578837967995e508a962c74e7d882b95c66a407ea2683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:25:20 compute-0 systemd[1]: Started libpod-conmon-811c20291a34aeeabce578837967995e508a962c74e7d882b95c66a407ea2683.scope.
Jan 29 09:25:20 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5480a4c8d4cf47540f5ef5cc2e448d288afbe4ea957ae093f3fd529c6ac09d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5480a4c8d4cf47540f5ef5cc2e448d288afbe4ea957ae093f3fd529c6ac09d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5480a4c8d4cf47540f5ef5cc2e448d288afbe4ea957ae093f3fd529c6ac09d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5480a4c8d4cf47540f5ef5cc2e448d288afbe4ea957ae093f3fd529c6ac09d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:25:20 compute-0 podman[200629]: 2026-01-29 09:25:20.685329368 +0000 UTC m=+0.021522225 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5480a4c8d4cf47540f5ef5cc2e448d288afbe4ea957ae093f3fd529c6ac09d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:25:20 compute-0 podman[200629]: 2026-01-29 09:25:20.793974372 +0000 UTC m=+0.130167199 container init 811c20291a34aeeabce578837967995e508a962c74e7d882b95c66a407ea2683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:25:20 compute-0 podman[200629]: 2026-01-29 09:25:20.801380656 +0000 UTC m=+0.137573483 container start 811c20291a34aeeabce578837967995e508a962c74e7d882b95c66a407ea2683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 09:25:20 compute-0 podman[200629]: 2026-01-29 09:25:20.805046242 +0000 UTC m=+0.141239089 container attach 811c20291a34aeeabce578837967995e508a962c74e7d882b95c66a407ea2683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:25:20 compute-0 sudo[200771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cypbvyqwrqmblueflssokxsftdasbzrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678720.1502364-778-55753176648996/AnsiballZ_copy.py'
Jan 29 09:25:20 compute-0 sudo[200771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:21 compute-0 python3.9[200774]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678720.1502364-778-55753176648996/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:21 compute-0 sudo[200771]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:21 compute-0 pensive_bassi[200668]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:25:21 compute-0 pensive_bassi[200668]: --> All data devices are unavailable
Jan 29 09:25:21 compute-0 systemd[1]: libpod-811c20291a34aeeabce578837967995e508a962c74e7d882b95c66a407ea2683.scope: Deactivated successfully.
Jan 29 09:25:21 compute-0 conmon[200668]: conmon 811c20291a34aeeabce5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-811c20291a34aeeabce578837967995e508a962c74e7d882b95c66a407ea2683.scope/container/memory.events
Jan 29 09:25:21 compute-0 podman[200629]: 2026-01-29 09:25:21.25408448 +0000 UTC m=+0.590277307 container died 811c20291a34aeeabce578837967995e508a962c74e7d882b95c66a407ea2683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 09:25:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f5480a4c8d4cf47540f5ef5cc2e448d288afbe4ea957ae093f3fd529c6ac09d-merged.mount: Deactivated successfully.
Jan 29 09:25:21 compute-0 podman[200629]: 2026-01-29 09:25:21.303459493 +0000 UTC m=+0.639652320 container remove 811c20291a34aeeabce578837967995e508a962c74e7d882b95c66a407ea2683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:25:21 compute-0 systemd[1]: libpod-conmon-811c20291a34aeeabce578837967995e508a962c74e7d882b95c66a407ea2683.scope: Deactivated successfully.
Jan 29 09:25:21 compute-0 sudo[200399]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:21 compute-0 sudo[200876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:25:21 compute-0 sudo[200876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:25:21 compute-0 sudo[200876]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:21 compute-0 sudo[200904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:25:21 compute-0 sudo[200904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:25:21 compute-0 sudo[200999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uymxiurdbqnubzligsuwbmcavwxawowy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678721.296934-778-98359876781455/AnsiballZ_stat.py'
Jan 29 09:25:21 compute-0 sudo[200999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:21 compute-0 podman[201014]: 2026-01-29 09:25:21.6879361 +0000 UTC m=+0.051022687 container create 0b7c7a94c52c519d6bce74b4232cb61411ea54435c04b37d58e0daf86c879a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 29 09:25:21 compute-0 python3.9[201001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:21 compute-0 sudo[200999]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:21 compute-0 systemd[1]: Started libpod-conmon-0b7c7a94c52c519d6bce74b4232cb61411ea54435c04b37d58e0daf86c879a52.scope.
Jan 29 09:25:21 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:25:21 compute-0 podman[201014]: 2026-01-29 09:25:21.658188871 +0000 UTC m=+0.021275468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:25:21 compute-0 podman[201014]: 2026-01-29 09:25:21.768098328 +0000 UTC m=+0.131184905 container init 0b7c7a94c52c519d6bce74b4232cb61411ea54435c04b37d58e0daf86c879a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hugle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:25:21 compute-0 podman[201014]: 2026-01-29 09:25:21.773966782 +0000 UTC m=+0.137053359 container start 0b7c7a94c52c519d6bce74b4232cb61411ea54435c04b37d58e0daf86c879a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hugle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:25:21 compute-0 pensive_hugle[201030]: 167 167
Jan 29 09:25:21 compute-0 systemd[1]: libpod-0b7c7a94c52c519d6bce74b4232cb61411ea54435c04b37d58e0daf86c879a52.scope: Deactivated successfully.
Jan 29 09:25:21 compute-0 podman[201014]: 2026-01-29 09:25:21.781467678 +0000 UTC m=+0.144554255 container attach 0b7c7a94c52c519d6bce74b4232cb61411ea54435c04b37d58e0daf86c879a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hugle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:25:21 compute-0 podman[201014]: 2026-01-29 09:25:21.781826858 +0000 UTC m=+0.144913435 container died 0b7c7a94c52c519d6bce74b4232cb61411ea54435c04b37d58e0daf86c879a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hugle, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 09:25:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1a4412d5ae16ccf3f01def4bc02c36205067c5cbffb415a8526686e3992c627-merged.mount: Deactivated successfully.
Jan 29 09:25:21 compute-0 podman[201014]: 2026-01-29 09:25:21.816335101 +0000 UTC m=+0.179421678 container remove 0b7c7a94c52c519d6bce74b4232cb61411ea54435c04b37d58e0daf86c879a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:25:21 compute-0 systemd[1]: libpod-conmon-0b7c7a94c52c519d6bce74b4232cb61411ea54435c04b37d58e0daf86c879a52.scope: Deactivated successfully.
Jan 29 09:25:21 compute-0 podman[201095]: 2026-01-29 09:25:21.947312901 +0000 UTC m=+0.044076615 container create 3efed2a6e925e6551fc9575422f3589fdae0b6145045c4fd545b7eea7446567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:25:21 compute-0 systemd[1]: Started libpod-conmon-3efed2a6e925e6551fc9575422f3589fdae0b6145045c4fd545b7eea7446567f.scope.
Jan 29 09:25:21 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a900c5fcbe94612c970876abc6566f727d2a0def230ee7370d0f54b128d3b3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a900c5fcbe94612c970876abc6566f727d2a0def230ee7370d0f54b128d3b3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a900c5fcbe94612c970876abc6566f727d2a0def230ee7370d0f54b128d3b3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a900c5fcbe94612c970876abc6566f727d2a0def230ee7370d0f54b128d3b3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:25:22 compute-0 podman[201095]: 2026-01-29 09:25:21.926339872 +0000 UTC m=+0.023103616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:25:22 compute-0 podman[201095]: 2026-01-29 09:25:22.025251362 +0000 UTC m=+0.122015096 container init 3efed2a6e925e6551fc9575422f3589fdae0b6145045c4fd545b7eea7446567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:25:22 compute-0 podman[201095]: 2026-01-29 09:25:22.030605762 +0000 UTC m=+0.127369466 container start 3efed2a6e925e6551fc9575422f3589fdae0b6145045c4fd545b7eea7446567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_clarke, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 09:25:22 compute-0 podman[201095]: 2026-01-29 09:25:22.035450559 +0000 UTC m=+0.132214293 container attach 3efed2a6e925e6551fc9575422f3589fdae0b6145045c4fd545b7eea7446567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:25:22 compute-0 sudo[201195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhtvhoixqxauhkzpapcdhqymbaxccqej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678721.296934-778-98359876781455/AnsiballZ_copy.py'
Jan 29 09:25:22 compute-0 sudo[201195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:22 compute-0 ceph-mon[75183]: pgmap v439: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:22 compute-0 python3.9[201197]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678721.296934-778-98359876781455/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]: {
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:     "0": [
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:         {
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "devices": [
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "/dev/loop3"
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             ],
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_name": "ceph_lv0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_size": "21470642176",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "name": "ceph_lv0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "tags": {
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.cluster_name": "ceph",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.crush_device_class": "",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.encrypted": "0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.objectstore": "bluestore",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.osd_id": "0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.type": "block",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.vdo": "0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.with_tpm": "0"
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             },
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "type": "block",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "vg_name": "ceph_vg0"
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:         }
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:     ],
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:     "1": [
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:         {
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "devices": [
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "/dev/loop4"
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             ],
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_name": "ceph_lv1",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_size": "21470642176",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "name": "ceph_lv1",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "tags": {
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.cluster_name": "ceph",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.crush_device_class": "",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.encrypted": "0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.objectstore": "bluestore",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.osd_id": "1",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.type": "block",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.vdo": "0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.with_tpm": "0"
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             },
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "type": "block",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "vg_name": "ceph_vg1"
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:         }
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:     ],
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:     "2": [
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:         {
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "devices": [
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "/dev/loop5"
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             ],
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_name": "ceph_lv2",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_size": "21470642176",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "name": "ceph_lv2",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "tags": {
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.cluster_name": "ceph",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.crush_device_class": "",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.encrypted": "0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.objectstore": "bluestore",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.osd_id": "2",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.type": "block",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.vdo": "0",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:                 "ceph.with_tpm": "0"
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             },
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "type": "block",
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:             "vg_name": "ceph_vg2"
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:         }
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]:     ]
Jan 29 09:25:22 compute-0 inspiring_clarke[201140]: }
Jan 29 09:25:22 compute-0 sudo[201195]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:22 compute-0 systemd[1]: libpod-3efed2a6e925e6551fc9575422f3589fdae0b6145045c4fd545b7eea7446567f.scope: Deactivated successfully.
Jan 29 09:25:22 compute-0 podman[201095]: 2026-01-29 09:25:22.349756998 +0000 UTC m=+0.446520712 container died 3efed2a6e925e6551fc9575422f3589fdae0b6145045c4fd545b7eea7446567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:25:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a900c5fcbe94612c970876abc6566f727d2a0def230ee7370d0f54b128d3b3d-merged.mount: Deactivated successfully.
Jan 29 09:25:22 compute-0 podman[201095]: 2026-01-29 09:25:22.473730224 +0000 UTC m=+0.570493938 container remove 3efed2a6e925e6551fc9575422f3589fdae0b6145045c4fd545b7eea7446567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:25:22 compute-0 systemd[1]: libpod-conmon-3efed2a6e925e6551fc9575422f3589fdae0b6145045c4fd545b7eea7446567f.scope: Deactivated successfully.
Jan 29 09:25:22 compute-0 sudo[200904]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:22 compute-0 sudo[201292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:25:22 compute-0 sudo[201292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:25:22 compute-0 sudo[201292]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v440: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:22 compute-0 sudo[201340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:25:22 compute-0 sudo[201340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:25:22 compute-0 sudo[201415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uufrdgyvaiyyemecjzwuhlvrsjyykhmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678722.4556935-778-77904176580012/AnsiballZ_stat.py'
Jan 29 09:25:22 compute-0 sudo[201415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:22 compute-0 python3.9[201417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:22 compute-0 sudo[201415]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:22 compute-0 podman[201430]: 2026-01-29 09:25:22.932161178 +0000 UTC m=+0.044810085 container create 62fb9299993c83180f5dc1f7fbdc2b560a5a33f5b3b77990b0409c02490a8e5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 09:25:22 compute-0 systemd[1]: Started libpod-conmon-62fb9299993c83180f5dc1f7fbdc2b560a5a33f5b3b77990b0409c02490a8e5b.scope.
Jan 29 09:25:23 compute-0 podman[201430]: 2026-01-29 09:25:22.91049385 +0000 UTC m=+0.023142777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:25:23 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:25:23 compute-0 podman[201430]: 2026-01-29 09:25:23.041937652 +0000 UTC m=+0.154586579 container init 62fb9299993c83180f5dc1f7fbdc2b560a5a33f5b3b77990b0409c02490a8e5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_mclean, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 09:25:23 compute-0 podman[201430]: 2026-01-29 09:25:23.047885768 +0000 UTC m=+0.160534675 container start 62fb9299993c83180f5dc1f7fbdc2b560a5a33f5b3b77990b0409c02490a8e5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_mclean, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:25:23 compute-0 podman[201430]: 2026-01-29 09:25:23.052322774 +0000 UTC m=+0.164971701 container attach 62fb9299993c83180f5dc1f7fbdc2b560a5a33f5b3b77990b0409c02490a8e5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 09:25:23 compute-0 awesome_mclean[201469]: 167 167
Jan 29 09:25:23 compute-0 systemd[1]: libpod-62fb9299993c83180f5dc1f7fbdc2b560a5a33f5b3b77990b0409c02490a8e5b.scope: Deactivated successfully.
Jan 29 09:25:23 compute-0 podman[201498]: 2026-01-29 09:25:23.093912613 +0000 UTC m=+0.028890248 container died 62fb9299993c83180f5dc1f7fbdc2b560a5a33f5b3b77990b0409c02490a8e5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_mclean, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-790ec08a9f7559c0487910a913b330b6280a411ef8baed28bb47268b7f662dcf-merged.mount: Deactivated successfully.
Jan 29 09:25:23 compute-0 podman[201498]: 2026-01-29 09:25:23.13621887 +0000 UTC m=+0.071196495 container remove 62fb9299993c83180f5dc1f7fbdc2b560a5a33f5b3b77990b0409c02490a8e5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 09:25:23 compute-0 systemd[1]: libpod-conmon-62fb9299993c83180f5dc1f7fbdc2b560a5a33f5b3b77990b0409c02490a8e5b.scope: Deactivated successfully.
Jan 29 09:25:23 compute-0 sudo[201591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loodbfcxlhukqnovufxrrmbrmmkbknxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678722.4556935-778-77904176580012/AnsiballZ_copy.py'
Jan 29 09:25:23 compute-0 sudo[201591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:23 compute-0 podman[201594]: 2026-01-29 09:25:23.26340023 +0000 UTC m=+0.037348628 container create 7e81f1a372b52947553afb963d1524ca7d34761db4f7932535297c632b327d3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 09:25:23 compute-0 systemd[1]: Started libpod-conmon-7e81f1a372b52947553afb963d1524ca7d34761db4f7932535297c632b327d3a.scope.
Jan 29 09:25:23 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621b89c89d3a71816da7eec006bedc665cae17a6fb3e862aebb1a6f91d8bf139/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621b89c89d3a71816da7eec006bedc665cae17a6fb3e862aebb1a6f91d8bf139/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621b89c89d3a71816da7eec006bedc665cae17a6fb3e862aebb1a6f91d8bf139/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621b89c89d3a71816da7eec006bedc665cae17a6fb3e862aebb1a6f91d8bf139/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:25:23 compute-0 podman[201594]: 2026-01-29 09:25:23.343517998 +0000 UTC m=+0.117466416 container init 7e81f1a372b52947553afb963d1524ca7d34761db4f7932535297c632b327d3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 09:25:23 compute-0 podman[201594]: 2026-01-29 09:25:23.24926837 +0000 UTC m=+0.023216788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:25:23 compute-0 podman[201594]: 2026-01-29 09:25:23.351311972 +0000 UTC m=+0.125260370 container start 7e81f1a372b52947553afb963d1524ca7d34761db4f7932535297c632b327d3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:25:23 compute-0 podman[201594]: 2026-01-29 09:25:23.356350274 +0000 UTC m=+0.130298692 container attach 7e81f1a372b52947553afb963d1524ca7d34761db4f7932535297c632b327d3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 09:25:23 compute-0 python3.9[201601]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678722.4556935-778-77904176580012/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:23 compute-0 sudo[201591]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:24 compute-0 lvm[201839]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:25:24 compute-0 lvm[201839]: VG ceph_vg0 finished
Jan 29 09:25:24 compute-0 lvm[201842]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:25:24 compute-0 lvm[201842]: VG ceph_vg1 finished
Jan 29 09:25:24 compute-0 python3.9[201821]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:25:24 compute-0 lvm[201844]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:25:24 compute-0 lvm[201844]: VG ceph_vg2 finished
Jan 29 09:25:24 compute-0 bold_varahamihira[201613]: {}
Jan 29 09:25:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:25:24 compute-0 systemd[1]: libpod-7e81f1a372b52947553afb963d1524ca7d34761db4f7932535297c632b327d3a.scope: Deactivated successfully.
Jan 29 09:25:24 compute-0 systemd[1]: libpod-7e81f1a372b52947553afb963d1524ca7d34761db4f7932535297c632b327d3a.scope: Consumed 1.196s CPU time.
Jan 29 09:25:24 compute-0 podman[201594]: 2026-01-29 09:25:24.18385636 +0000 UTC m=+0.957804758 container died 7e81f1a372b52947553afb963d1524ca7d34761db4f7932535297c632b327d3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 29 09:25:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-621b89c89d3a71816da7eec006bedc665cae17a6fb3e862aebb1a6f91d8bf139-merged.mount: Deactivated successfully.
Jan 29 09:25:24 compute-0 podman[201594]: 2026-01-29 09:25:24.2697922 +0000 UTC m=+1.043740598 container remove 7e81f1a372b52947553afb963d1524ca7d34761db4f7932535297c632b327d3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Jan 29 09:25:24 compute-0 ceph-mon[75183]: pgmap v440: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:24 compute-0 systemd[1]: libpod-conmon-7e81f1a372b52947553afb963d1524ca7d34761db4f7932535297c632b327d3a.scope: Deactivated successfully.
Jan 29 09:25:24 compute-0 sudo[201340]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:25:24 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:25:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:25:24 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:25:24 compute-0 sudo[201938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:25:24 compute-0 sudo[201938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:25:24 compute-0 sudo[201938]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v441: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:24 compute-0 sudo[202036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfqdvxjdiqiwknlowcbowfzcggqwzwzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678724.2459507-984-86170233674123/AnsiballZ_seboolean.py'
Jan 29 09:25:24 compute-0 sudo[202036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:24 compute-0 python3.9[202038]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 29 09:25:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:25:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:25:25 compute-0 ceph-mon[75183]: pgmap v441: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:26 compute-0 sudo[202036]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:26 compute-0 sudo[202192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bimcmgdiykfojwjmasgspmfrkznofqgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678726.1988015-992-53510701200264/AnsiballZ_copy.py'
Jan 29 09:25:26 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 29 09:25:26 compute-0 sudo[202192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:25:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:25:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:25:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:25:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:25:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:25:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v442: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:26 compute-0 python3.9[202194]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:26 compute-0 sudo[202192]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:27 compute-0 sudo[202344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otbjfpebmogbxoigmafrreoledkybfui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678726.7912536-992-275104868422022/AnsiballZ_copy.py'
Jan 29 09:25:27 compute-0 sudo[202344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:27 compute-0 python3.9[202346]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:27 compute-0 sudo[202344]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:27 compute-0 ceph-mon[75183]: pgmap v442: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:27 compute-0 sudo[202496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kowzujfgikhqbygeaparvpegkctmxfrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678727.5300527-992-245007850916735/AnsiballZ_copy.py'
Jan 29 09:25:27 compute-0 sudo[202496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:27 compute-0 python3.9[202498]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:27 compute-0 sudo[202496]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:28 compute-0 sudo[202648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmdkimphaiuryumvjohqakwpbpldecel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678728.0803142-992-137926914778409/AnsiballZ_copy.py'
Jan 29 09:25:28 compute-0 sudo[202648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:28 compute-0 python3.9[202650]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:28 compute-0 sudo[202648]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v443: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:28 compute-0 sudo[202800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbehqxrnytiwhnjrawrxacbpldpoduoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678728.6810846-992-177769417421880/AnsiballZ_copy.py'
Jan 29 09:25:28 compute-0 sudo[202800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:29 compute-0 python3.9[202802]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:25:29 compute-0 sudo[202800]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:29 compute-0 sudo[202952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bawkuoehmyjrfqqkzyqabqyxelsovgkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678729.3252513-1028-154324680571328/AnsiballZ_copy.py'
Jan 29 09:25:29 compute-0 sudo[202952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:29 compute-0 ceph-mon[75183]: pgmap v443: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:29 compute-0 python3.9[202954]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:29 compute-0 sudo[202952]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:30 compute-0 sudo[203104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riesivxyuidnrztvgjajesqoazhpjikh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678729.981784-1028-220235312112084/AnsiballZ_copy.py'
Jan 29 09:25:30 compute-0 sudo[203104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:30 compute-0 python3.9[203106]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:30 compute-0 sudo[203104]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v444: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:30 compute-0 sudo[203256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqqpawptuimernvnryeilftffpojgvdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678730.5596066-1028-20955871543088/AnsiballZ_copy.py'
Jan 29 09:25:30 compute-0 sudo[203256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:30 compute-0 python3.9[203258]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:31 compute-0 sudo[203256]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:31 compute-0 sudo[203408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plzivqdecebctvekytntyjlcoxrrhqvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678731.1360857-1028-5419615617449/AnsiballZ_copy.py'
Jan 29 09:25:31 compute-0 sudo[203408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:31 compute-0 python3.9[203410]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:31 compute-0 sudo[203408]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:31 compute-0 ceph-mon[75183]: pgmap v444: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:31 compute-0 sudo[203560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gktnfiwmlmspyfyotikflxcffmoimulo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678731.7272525-1028-89858772805169/AnsiballZ_copy.py'
Jan 29 09:25:31 compute-0 sudo[203560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:32 compute-0 python3.9[203562]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:32 compute-0 sudo[203560]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v445: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:32 compute-0 sudo[203712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwdvbabyclsuqugutijzzjouryruftvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678732.3639836-1064-33158262528915/AnsiballZ_systemd.py'
Jan 29 09:25:32 compute-0 sudo[203712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:32 compute-0 python3.9[203714]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:25:32 compute-0 systemd[1]: Reloading.
Jan 29 09:25:33 compute-0 systemd-rc-local-generator[203731]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:25:33 compute-0 systemd-sysv-generator[203735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:25:33 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 29 09:25:33 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 29 09:25:33 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 29 09:25:33 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 29 09:25:33 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 29 09:25:33 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 29 09:25:33 compute-0 sudo[203712]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:33 compute-0 sudo[203905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxxeififsuufuskvsusngkrjwwdtndea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678733.4954324-1064-35131944305218/AnsiballZ_systemd.py'
Jan 29 09:25:33 compute-0 sudo[203905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:33 compute-0 ceph-mon[75183]: pgmap v445: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:34 compute-0 python3.9[203907]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:25:34 compute-0 systemd[1]: Reloading.
Jan 29 09:25:34 compute-0 systemd-rc-local-generator[203936]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:25:34 compute-0 systemd-sysv-generator[203940]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:25:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:25:34 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 29 09:25:34 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 29 09:25:34 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 29 09:25:34 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 29 09:25:34 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 29 09:25:34 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 29 09:25:34 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 29 09:25:34 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 29 09:25:34 compute-0 sudo[203905]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v446: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:34 compute-0 sudo[204122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heimmurpqnmbzsqqfckkimzxbhtrixmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678734.606142-1064-163555899585818/AnsiballZ_systemd.py'
Jan 29 09:25:34 compute-0 sudo[204122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:35 compute-0 python3.9[204124]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:25:35 compute-0 systemd[1]: Reloading.
Jan 29 09:25:35 compute-0 systemd-sysv-generator[204152]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:25:35 compute-0 systemd-rc-local-generator[204148]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:25:35 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 29 09:25:35 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 29 09:25:35 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 29 09:25:35 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 29 09:25:35 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 29 09:25:35 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 29 09:25:35 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 29 09:25:35 compute-0 sudo[204122]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:35 compute-0 ceph-mon[75183]: pgmap v446: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:35 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 29 09:25:35 compute-0 auditd[698]: Audit daemon rotating log files
Jan 29 09:25:35 compute-0 sudo[204333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihbhvoucwddziptbdokgpyentulmmjtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678735.6975958-1064-106976290415601/AnsiballZ_systemd.py'
Jan 29 09:25:35 compute-0 sudo[204333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:36 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged.
Jan 29 09:25:36 compute-0 systemd[1]: Started dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 29 09:25:36 compute-0 python3.9[204335]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:25:36 compute-0 systemd[1]: Reloading.
Jan 29 09:25:36 compute-0 systemd-rc-local-generator[204372]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:25:36 compute-0 systemd-sysv-generator[204375]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:25:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v447: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:36 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 29 09:25:36 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 29 09:25:36 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 09:25:36 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 29 09:25:36 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 29 09:25:36 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 29 09:25:36 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 29 09:25:36 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 29 09:25:36 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 29 09:25:36 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 29 09:25:36 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 29 09:25:36 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 29 09:25:36 compute-0 podman[204381]: 2026-01-29 09:25:36.689421705 +0000 UTC m=+0.090514841 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 09:25:36 compute-0 sudo[204333]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:37 compute-0 setroubleshoot[204160]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 14534f11-073d-4a62-a978-906464cec032
Jan 29 09:25:37 compute-0 setroubleshoot[204160]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 29 09:25:37 compute-0 sudo[204582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyexhooyowhaozwnxoxzizodtzmhlakj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678736.8315825-1064-80700775560739/AnsiballZ_systemd.py'
Jan 29 09:25:37 compute-0 sudo[204582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:37 compute-0 python3.9[204584]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:25:37 compute-0 systemd[1]: Reloading.
Jan 29 09:25:37 compute-0 systemd-sysv-generator[204614]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:25:37 compute-0 systemd-rc-local-generator[204609]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:25:37 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 29 09:25:37 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 29 09:25:37 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 29 09:25:37 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 29 09:25:37 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 29 09:25:37 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 29 09:25:37 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 29 09:25:37 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 29 09:25:37 compute-0 sudo[204582]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:37 compute-0 ceph-mon[75183]: pgmap v447: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:38 compute-0 sudo[204794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljnrfluttvobgzfgpezhmagykwctoozu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678738.0534816-1101-62115408474284/AnsiballZ_file.py'
Jan 29 09:25:38 compute-0 sudo[204794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:38 compute-0 python3.9[204796]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:38 compute-0 sudo[204794]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v448: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:38 compute-0 sudo[204946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phktubmvwlvtvdxctexrfjkzmxjltcth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678738.6657465-1109-185866688984452/AnsiballZ_find.py'
Jan 29 09:25:38 compute-0 sudo[204946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:39 compute-0 python3.9[204948]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 29 09:25:39 compute-0 sudo[204946]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:25:39 compute-0 sudo[205098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utjzmqjquviwrygelpjtdsdmhxnsvdjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678739.2967212-1117-131885894123849/AnsiballZ_command.py'
Jan 29 09:25:39 compute-0 sudo[205098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:39 compute-0 python3.9[205100]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:25:39 compute-0 sudo[205098]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:39 compute-0 ceph-mon[75183]: pgmap v448: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:40 compute-0 python3.9[205254]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 29 09:25:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v449: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:40 compute-0 podman[205378]: 2026-01-29 09:25:40.995515033 +0000 UTC m=+0.093216452 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 29 09:25:41 compute-0 python3.9[205414]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:41 compute-0 python3.9[205544]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678740.6459925-1136-10257847009652/.source.xml follow=False _original_basename=secret.xml.j2 checksum=46be7e9f839ac3e3cedef48142ad87f960c47cff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:41 compute-0 ceph-mon[75183]: pgmap v449: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:42 compute-0 sudo[205694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiazwqibglpralwzbaodqsngqlixxfgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678741.7941751-1151-205851284325570/AnsiballZ_command.py'
Jan 29 09:25:42 compute-0 sudo[205694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:42 compute-0 python3.9[205696]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 3fdce3ca-565d-5459-88e8-1ffe58b48437
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:25:42 compute-0 polkitd[43507]: Registered Authentication Agent for unix-process:205698:310164 (system bus name :1.2447 [pkttyagent --process 205698 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 29 09:25:42 compute-0 polkitd[43507]: Unregistered Authentication Agent for unix-process:205698:310164 (system bus name :1.2447, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 29 09:25:42 compute-0 polkitd[43507]: Registered Authentication Agent for unix-process:205697:310164 (system bus name :1.2448 [pkttyagent --process 205697 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 29 09:25:42 compute-0 polkitd[43507]: Unregistered Authentication Agent for unix-process:205697:310164 (system bus name :1.2448, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 29 09:25:42 compute-0 sudo[205694]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v450: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:42 compute-0 python3.9[205858]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:43 compute-0 sudo[206008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apamfwrajlyqldxhsdowflbgmvzrzyzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678743.089927-1167-171106459664394/AnsiballZ_command.py'
Jan 29 09:25:43 compute-0 sudo[206008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:43 compute-0 sudo[206008]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:43 compute-0 ceph-mon[75183]: pgmap v450: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:43 compute-0 sudo[206161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esvlvwvvookcbnqtljrarblvygciasqm ; FSID=3fdce3ca-565d-5459-88e8-1ffe58b48437 KEY=AQAdJHtpAAAAABAAaI37n/Z6PSZlO/27IIsTqw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678743.7356184-1175-29069026528204/AnsiballZ_command.py'
Jan 29 09:25:43 compute-0 sudo[206161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:25:44 compute-0 polkitd[43507]: Registered Authentication Agent for unix-process:206164:310358 (system bus name :1.2451 [pkttyagent --process 206164 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 29 09:25:44 compute-0 polkitd[43507]: Unregistered Authentication Agent for unix-process:206164:310358 (system bus name :1.2451, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 29 09:25:44 compute-0 sudo[206161]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v451: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:44 compute-0 sudo[206319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtonetejgbrvcygpjmiqzjrkojbiurrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678744.558226-1183-161038577333693/AnsiballZ_copy.py'
Jan 29 09:25:44 compute-0 sudo[206319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:44 compute-0 python3.9[206321]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:45 compute-0 sudo[206319]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:45 compute-0 sudo[206471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbpaaxhkerezyoihppoblzjhrizzaocy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678745.1688008-1191-137616567650531/AnsiballZ_stat.py'
Jan 29 09:25:45 compute-0 sudo[206471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:45 compute-0 python3.9[206473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:45 compute-0 sudo[206471]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:45 compute-0 sudo[206594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvhtuctdurgjprggfechsgxzbytzsosm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678745.1688008-1191-137616567650531/AnsiballZ_copy.py'
Jan 29 09:25:45 compute-0 sudo[206594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:45 compute-0 ceph-mon[75183]: pgmap v451: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:46 compute-0 python3.9[206596]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678745.1688008-1191-137616567650531/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:46 compute-0 sudo[206594]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:46 compute-0 sudo[206746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhlhrbclqxshjxvrjgnaxhzjmqijulgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678746.3176384-1207-124647853956625/AnsiballZ_file.py'
Jan 29 09:25:46 compute-0 sudo[206746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v452: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:46 compute-0 python3.9[206748]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:46 compute-0 sudo[206746]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:47 compute-0 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 29 09:25:47 compute-0 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.020s CPU time.
Jan 29 09:25:47 compute-0 sudo[206898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wthwttuvdldgopknknxkizmoinmbvfpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678746.8985434-1215-59114107339094/AnsiballZ_stat.py'
Jan 29 09:25:47 compute-0 sudo[206898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:47 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 29 09:25:47 compute-0 python3.9[206900]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:47 compute-0 sudo[206898]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:47 compute-0 sudo[206976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqcblimajwrnzsazbmmuismsswuoepfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678746.8985434-1215-59114107339094/AnsiballZ_file.py'
Jan 29 09:25:47 compute-0 sudo[206976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:47 compute-0 python3.9[206978]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:47 compute-0 sudo[206976]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:47 compute-0 ceph-mon[75183]: pgmap v452: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:48 compute-0 sudo[207128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlcdsplvkthtkrkstvtlpncqorprcnja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678747.9420083-1227-131178991220385/AnsiballZ_stat.py'
Jan 29 09:25:48 compute-0 sudo[207128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:48 compute-0 python3.9[207130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:48 compute-0 sudo[207128]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:48 compute-0 sudo[207206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tejklvyeedslnmdgyhrphiwuqqljfign ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678747.9420083-1227-131178991220385/AnsiballZ_file.py'
Jan 29 09:25:48 compute-0 sudo[207206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v453: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:48 compute-0 python3.9[207208]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zf8se3um recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:48 compute-0 sudo[207206]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:25:49 compute-0 sudo[207358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oewfrjifqhqvttqymrlzcsmkrubmucvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678748.9241073-1239-118952017166752/AnsiballZ_stat.py'
Jan 29 09:25:49 compute-0 sudo[207358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:49 compute-0 python3.9[207360]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:49 compute-0 sudo[207358]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:49 compute-0 sudo[207436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ierserlsvvgdftdldvlghoghgreorerd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678748.9241073-1239-118952017166752/AnsiballZ_file.py'
Jan 29 09:25:49 compute-0 sudo[207436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:49 compute-0 python3.9[207438]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:49 compute-0 sudo[207436]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:50 compute-0 ceph-mon[75183]: pgmap v453: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:50 compute-0 sudo[207588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaudcgmrgqsrjqvyjwuwdlssxfbxaoox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678749.9350307-1252-238863724004145/AnsiballZ_command.py'
Jan 29 09:25:50 compute-0 sudo[207588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:50 compute-0 python3.9[207590]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:25:50 compute-0 sudo[207588]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v454: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:50 compute-0 sudo[207741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lplubleiqsnrisgwpowmyqtnpsqvxdus ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769678750.5371194-1260-106712301525142/AnsiballZ_edpm_nftables_from_files.py'
Jan 29 09:25:50 compute-0 sudo[207741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:51 compute-0 python3[207743]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 29 09:25:51 compute-0 sudo[207741]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:51 compute-0 sudo[207893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jttyjzpwvzscwwrbzoqusdegniyypaai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678751.3720798-1268-177372466713589/AnsiballZ_stat.py'
Jan 29 09:25:51 compute-0 sudo[207893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:51 compute-0 python3.9[207895]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:51 compute-0 sudo[207893]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:52 compute-0 ceph-mon[75183]: pgmap v454: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:52 compute-0 sudo[207971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jybrgoymkbvcmhinlksdqphnywgossce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678751.3720798-1268-177372466713589/AnsiballZ_file.py'
Jan 29 09:25:52 compute-0 sudo[207971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:52 compute-0 python3.9[207973]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:52 compute-0 sudo[207971]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v455: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:52 compute-0 sudo[208123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxnxznbtqedmfqsbxdhxymwsumfttfne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678752.3824387-1280-203704699146414/AnsiballZ_stat.py'
Jan 29 09:25:52 compute-0 sudo[208123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:52 compute-0 python3.9[208125]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:52 compute-0 sudo[208123]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:53 compute-0 sudo[208248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnuyxxmpzxrdmvopiexjvzqnikmxfakz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678752.3824387-1280-203704699146414/AnsiballZ_copy.py'
Jan 29 09:25:53 compute-0 sudo[208248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:53 compute-0 python3.9[208250]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678752.3824387-1280-203704699146414/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:53 compute-0 sudo[208248]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:53 compute-0 sudo[208400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfwrjmwuximcxslofthrfqlsyfeiiajd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678753.5032184-1295-119548635484824/AnsiballZ_stat.py'
Jan 29 09:25:53 compute-0 sudo[208400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:53 compute-0 python3.9[208402]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:53 compute-0 sudo[208400]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:54 compute-0 ceph-mon[75183]: pgmap v455: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:54 compute-0 sudo[208478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdfqenpjtekizmuhrnrvtcjryydvoumi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678753.5032184-1295-119548635484824/AnsiballZ_file.py'
Jan 29 09:25:54 compute-0 sudo[208478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:25:54 compute-0 python3.9[208480]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:54 compute-0 sudo[208478]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v456: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:54 compute-0 sudo[208630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtqdxwsipcsyecbhqijeuuvuudqjejvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678754.5449777-1307-115174435861082/AnsiballZ_stat.py'
Jan 29 09:25:54 compute-0 sudo[208630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:54 compute-0 python3.9[208632]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:55 compute-0 sudo[208630]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:55 compute-0 sudo[208708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djewfllbamzfjldqfqllcedwyczsqmgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678754.5449777-1307-115174435861082/AnsiballZ_file.py'
Jan 29 09:25:55 compute-0 sudo[208708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:55 compute-0 python3.9[208710]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:55 compute-0 sudo[208708]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:55 compute-0 sudo[208860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mirjzozqdckvthnxwcohlpwbhaqxonpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678755.5725057-1319-28294953687520/AnsiballZ_stat.py'
Jan 29 09:25:55 compute-0 sudo[208860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:25:55
Jan 29 09:25:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:25:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:25:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['vms', '.mgr', 'cephfs.cephfs.data', 'volumes', 'backups', 'images', 'cephfs.cephfs.meta']
Jan 29 09:25:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:25:56 compute-0 ceph-mon[75183]: pgmap v456: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:56 compute-0 python3.9[208862]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:25:56 compute-0 sudo[208860]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:56 compute-0 sudo[208985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oytlvlepabmzgxlnndxrujtalbhfwoww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678755.5725057-1319-28294953687520/AnsiballZ_copy.py'
Jan 29 09:25:56 compute-0 sudo[208985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v457: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:25:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:25:56 compute-0 python3.9[208987]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769678755.5725057-1319-28294953687520/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:56 compute-0 sudo[208985]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:57 compute-0 sudo[209137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjzvkpgvjppnurlllxfflsttjvpcknph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678756.8347282-1334-222616661552810/AnsiballZ_file.py'
Jan 29 09:25:57 compute-0 sudo[209137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:57 compute-0 python3.9[209139]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:57 compute-0 sudo[209137]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:57 compute-0 sudo[209289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqbzcbbrfedzrtvbyguioxzywibqvzho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678757.4327402-1342-142209279997528/AnsiballZ_command.py'
Jan 29 09:25:57 compute-0 sudo[209289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:57 compute-0 python3.9[209291]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:25:57 compute-0 sudo[209289]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:58 compute-0 ceph-mon[75183]: pgmap v457: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:58 compute-0 sudo[209444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvqocbuaumtmvqwxofbroftjlwqpccps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678758.0646281-1350-258640189968827/AnsiballZ_blockinfile.py'
Jan 29 09:25:58 compute-0 sudo[209444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v458: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:25:58 compute-0 python3.9[209446]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:25:58 compute-0 sudo[209444]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:59 compute-0 sudo[209596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udfxqglzibyaueyfyiecmveyopmmhoiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678758.8447833-1359-72425359824492/AnsiballZ_command.py'
Jan 29 09:25:59 compute-0 sudo[209596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:25:59 compute-0 python3.9[209598]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:25:59 compute-0 sudo[209596]: pam_unix(sudo:session): session closed for user root
Jan 29 09:25:59 compute-0 sudo[209749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syhobpbveriwwikxtueutkmuytoyqjyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678759.3858492-1367-268482851193851/AnsiballZ_stat.py'
Jan 29 09:25:59 compute-0 sudo[209749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:25:59 compute-0 python3.9[209751]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:25:59 compute-0 sudo[209749]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:00 compute-0 ceph-mon[75183]: pgmap v458: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:00 compute-0 sudo[209903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bayerfyaznqsvzsywhbcoirdhuvqmtql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678759.958317-1375-261968592532035/AnsiballZ_command.py'
Jan 29 09:26:00 compute-0 sudo[209903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:00 compute-0 python3.9[209905]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:26:00 compute-0 sudo[209903]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v459: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:00 compute-0 sudo[210058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydwearcfredulxcnkdhpkhsnippgsuow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678760.53299-1383-195636031280530/AnsiballZ_file.py'
Jan 29 09:26:00 compute-0 sudo[210058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:00 compute-0 python3.9[210060]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:00 compute-0 sudo[210058]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:01 compute-0 sudo[210210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvxwqtojnwavrhnpstzkzwgdfjtgelri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678761.2319694-1391-267405821944723/AnsiballZ_stat.py'
Jan 29 09:26:01 compute-0 sudo[210210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:26:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:26:01 compute-0 python3.9[210212]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:26:01 compute-0 sudo[210210]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:01 compute-0 sudo[210333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zicmsjxwpstuvoijkddcxtxedlizruul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678761.2319694-1391-267405821944723/AnsiballZ_copy.py'
Jan 29 09:26:01 compute-0 sudo[210333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:02 compute-0 ceph-mon[75183]: pgmap v459: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:02 compute-0 python3.9[210335]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678761.2319694-1391-267405821944723/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:02 compute-0 sudo[210333]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:02 compute-0 sudo[210485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcvjgtanaqjcwlwqphqidoftrcxlryjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678762.3116567-1406-262043523920910/AnsiballZ_stat.py'
Jan 29 09:26:02 compute-0 sudo[210485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v460: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:02 compute-0 python3.9[210487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:26:02 compute-0 sudo[210485]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:03 compute-0 sudo[210608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oncgoxkzgtfqaqmzrlqgrfyxgaoglqcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678762.3116567-1406-262043523920910/AnsiballZ_copy.py'
Jan 29 09:26:03 compute-0 sudo[210608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:03 compute-0 python3.9[210610]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678762.3116567-1406-262043523920910/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:03 compute-0 sudo[210608]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:03 compute-0 sudo[210760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrjsqdhduztffxeowukqfyspppkkrbju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678763.3436868-1421-189431033302700/AnsiballZ_stat.py'
Jan 29 09:26:03 compute-0 sudo[210760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:03 compute-0 python3.9[210762]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:26:03 compute-0 sudo[210760]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:04 compute-0 sudo[210883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oraugwonxowvsoyehjrdxdarxfcicnzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678763.3436868-1421-189431033302700/AnsiballZ_copy.py'
Jan 29 09:26:04 compute-0 sudo[210883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:04 compute-0 ceph-mon[75183]: pgmap v460: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:26:04 compute-0 python3.9[210885]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678763.3436868-1421-189431033302700/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:04 compute-0 sudo[210883]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v461: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:04 compute-0 sudo[211035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmviyznenkqflprevqutfmfqgjegpefd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678764.4178145-1436-218317862522344/AnsiballZ_systemd.py'
Jan 29 09:26:04 compute-0 sudo[211035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:04 compute-0 python3.9[211037]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:26:04 compute-0 systemd[1]: Reloading.
Jan 29 09:26:05 compute-0 systemd-rc-local-generator[211066]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:26:05 compute-0 systemd-sysv-generator[211069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:26:05 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 29 09:26:05 compute-0 sudo[211035]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:05 compute-0 sudo[211227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbgfflavbmtgitgvqdesoyefgjxcpker ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678765.501956-1444-28969201956692/AnsiballZ_systemd.py'
Jan 29 09:26:05 compute-0 sudo[211227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:06 compute-0 python3.9[211229]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 29 09:26:06 compute-0 systemd[1]: Reloading.
Jan 29 09:26:06 compute-0 ceph-mon[75183]: pgmap v461: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:06 compute-0 systemd-rc-local-generator[211259]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:26:06 compute-0 systemd-sysv-generator[211262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:26:06 compute-0 systemd[1]: Reloading.
Jan 29 09:26:06 compute-0 systemd-sysv-generator[211296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:26:06 compute-0 systemd-rc-local-generator[211292]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:26:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v462: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:06 compute-0 sudo[211227]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:07 compute-0 sshd-session[153023]: Connection closed by 192.168.122.30 port 44218
Jan 29 09:26:07 compute-0 podman[211327]: 2026-01-29 09:26:07.140539142 +0000 UTC m=+0.081027373 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 09:26:07 compute-0 sshd-session[153020]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:26:07 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 29 09:26:07 compute-0 systemd[1]: session-48.scope: Consumed 3min 2.916s CPU time.
Jan 29 09:26:07 compute-0 systemd-logind[799]: Session 48 logged out. Waiting for processes to exit.
Jan 29 09:26:07 compute-0 systemd-logind[799]: Removed session 48.
Jan 29 09:26:08 compute-0 ceph-mon[75183]: pgmap v462: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v463: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:26:09.026 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:26:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:26:09.027 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:26:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:26:09.027 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:26:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:26:10 compute-0 ceph-mon[75183]: pgmap v463: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v464: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:11 compute-0 podman[211353]: 2026-01-29 09:26:11.111057734 +0000 UTC m=+0.052251047 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 29 09:26:12 compute-0 ceph-mon[75183]: pgmap v464: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v465: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:12 compute-0 sshd-session[211372]: Accepted publickey for zuul from 192.168.122.30 port 48244 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:26:12 compute-0 systemd-logind[799]: New session 49 of user zuul.
Jan 29 09:26:12 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 29 09:26:12 compute-0 sshd-session[211372]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:26:13 compute-0 python3.9[211525]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:26:14 compute-0 ceph-mon[75183]: pgmap v465: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:26:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v466: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:14 compute-0 python3.9[211679]: ansible-ansible.builtin.service_facts Invoked
Jan 29 09:26:14 compute-0 network[211696]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 09:26:14 compute-0 network[211697]: 'network-scripts' will be removed from distribution in near future.
Jan 29 09:26:14 compute-0 network[211698]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 09:26:16 compute-0 ceph-mon[75183]: pgmap v466: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v467: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:17 compute-0 sudo[211968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsodeawzxkzzhpwgabmczkzcbnnxglra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678777.3506553-42-186956912905438/AnsiballZ_setup.py'
Jan 29 09:26:17 compute-0 sudo[211968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:17 compute-0 python3.9[211970]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 09:26:18 compute-0 sudo[211968]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:18 compute-0 ceph-mon[75183]: pgmap v467: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v468: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:20 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:26:20 compute-0 ceph-mon[75183]: pgmap v468: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:20 compute-0 sudo[212052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvsmylhwhpguwfjojkfslyeeunsmejjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678777.3506553-42-186956912905438/AnsiballZ_dnf.py'
Jan 29 09:26:20 compute-0 sudo[212052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:20 compute-0 python3.9[212054]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:26:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v469: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:22 compute-0 ceph-mon[75183]: pgmap v469: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v470: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:24 compute-0 ceph-mon[75183]: pgmap v470: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:24 compute-0 sudo[212056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:26:24 compute-0 sudo[212056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:26:24 compute-0 sudo[212056]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:24 compute-0 sudo[212081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:26:24 compute-0 sudo[212081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:26:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v471: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:24 compute-0 sudo[212081]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:26:24 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:26:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:26:24 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:26:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:26:24 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:26:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:26:24 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:26:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:26:24 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:26:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:26:24 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:26:25 compute-0 sudo[212138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:26:25 compute-0 sudo[212138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:26:25 compute-0 sudo[212138]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:25 compute-0 sudo[212163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:26:25 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:26:25 compute-0 sudo[212163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:26:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:26:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:26:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:26:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:26:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:26:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:26:25 compute-0 podman[212200]: 2026-01-29 09:26:25.352198506 +0000 UTC m=+0.033738372 container create 52c37666455fafb92c26e073746776f6f78aed235b2ae56141d6505a9be75d81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_saha, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 29 09:26:25 compute-0 systemd[1]: Started libpod-conmon-52c37666455fafb92c26e073746776f6f78aed235b2ae56141d6505a9be75d81.scope.
Jan 29 09:26:25 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:26:25 compute-0 podman[212200]: 2026-01-29 09:26:25.428355105 +0000 UTC m=+0.109895061 container init 52c37666455fafb92c26e073746776f6f78aed235b2ae56141d6505a9be75d81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 09:26:25 compute-0 podman[212200]: 2026-01-29 09:26:25.337573977 +0000 UTC m=+0.019113863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:26:25 compute-0 podman[212200]: 2026-01-29 09:26:25.436034814 +0000 UTC m=+0.117574710 container start 52c37666455fafb92c26e073746776f6f78aed235b2ae56141d6505a9be75d81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_saha, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:26:25 compute-0 podman[212200]: 2026-01-29 09:26:25.439580991 +0000 UTC m=+0.121120947 container attach 52c37666455fafb92c26e073746776f6f78aed235b2ae56141d6505a9be75d81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:26:25 compute-0 jolly_saha[212217]: 167 167
Jan 29 09:26:25 compute-0 podman[212200]: 2026-01-29 09:26:25.442402738 +0000 UTC m=+0.123942604 container died 52c37666455fafb92c26e073746776f6f78aed235b2ae56141d6505a9be75d81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_saha, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 09:26:25 compute-0 systemd[1]: libpod-52c37666455fafb92c26e073746776f6f78aed235b2ae56141d6505a9be75d81.scope: Deactivated successfully.
Jan 29 09:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ae27994f5329415fae192b98b7ca8bc4efb527f8339eaa1e4037412e59f344e-merged.mount: Deactivated successfully.
Jan 29 09:26:25 compute-0 podman[212200]: 2026-01-29 09:26:25.484932249 +0000 UTC m=+0.166472115 container remove 52c37666455fafb92c26e073746776f6f78aed235b2ae56141d6505a9be75d81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_saha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:26:25 compute-0 systemd[1]: libpod-conmon-52c37666455fafb92c26e073746776f6f78aed235b2ae56141d6505a9be75d81.scope: Deactivated successfully.
Jan 29 09:26:25 compute-0 podman[212241]: 2026-01-29 09:26:25.622198876 +0000 UTC m=+0.052366621 container create d15b58223daa758adcf10e7569f96a4ac73568d44bc6b7bf523f240c0de3432f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:26:25 compute-0 systemd[1]: Started libpod-conmon-d15b58223daa758adcf10e7569f96a4ac73568d44bc6b7bf523f240c0de3432f.scope.
Jan 29 09:26:25 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:26:25 compute-0 podman[212241]: 2026-01-29 09:26:25.601346187 +0000 UTC m=+0.031513942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0724052e5e34763a600e515c8c00aa8718ff5dd383120714e8375646711c515/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0724052e5e34763a600e515c8c00aa8718ff5dd383120714e8375646711c515/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0724052e5e34763a600e515c8c00aa8718ff5dd383120714e8375646711c515/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0724052e5e34763a600e515c8c00aa8718ff5dd383120714e8375646711c515/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0724052e5e34763a600e515c8c00aa8718ff5dd383120714e8375646711c515/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:26:25 compute-0 podman[212241]: 2026-01-29 09:26:25.726901194 +0000 UTC m=+0.157068969 container init d15b58223daa758adcf10e7569f96a4ac73568d44bc6b7bf523f240c0de3432f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 09:26:25 compute-0 podman[212241]: 2026-01-29 09:26:25.737872623 +0000 UTC m=+0.168040368 container start d15b58223daa758adcf10e7569f96a4ac73568d44bc6b7bf523f240c0de3432f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:26:25 compute-0 podman[212241]: 2026-01-29 09:26:25.74178691 +0000 UTC m=+0.171954675 container attach d15b58223daa758adcf10e7569f96a4ac73568d44bc6b7bf523f240c0de3432f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:26:25 compute-0 sudo[212052]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:26 compute-0 ceph-mon[75183]: pgmap v471: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:26 compute-0 affectionate_engelbart[212258]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:26:26 compute-0 affectionate_engelbart[212258]: --> All data devices are unavailable
Jan 29 09:26:26 compute-0 systemd[1]: libpod-d15b58223daa758adcf10e7569f96a4ac73568d44bc6b7bf523f240c0de3432f.scope: Deactivated successfully.
Jan 29 09:26:26 compute-0 podman[212241]: 2026-01-29 09:26:26.171234131 +0000 UTC m=+0.601401876 container died d15b58223daa758adcf10e7569f96a4ac73568d44bc6b7bf523f240c0de3432f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_engelbart, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 29 09:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0724052e5e34763a600e515c8c00aa8718ff5dd383120714e8375646711c515-merged.mount: Deactivated successfully.
Jan 29 09:26:26 compute-0 podman[212241]: 2026-01-29 09:26:26.222580073 +0000 UTC m=+0.652747818 container remove d15b58223daa758adcf10e7569f96a4ac73568d44bc6b7bf523f240c0de3432f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_engelbart, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:26:26 compute-0 systemd[1]: libpod-conmon-d15b58223daa758adcf10e7569f96a4ac73568d44bc6b7bf523f240c0de3432f.scope: Deactivated successfully.
Jan 29 09:26:26 compute-0 sudo[212163]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:26 compute-0 sudo[212365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:26:26 compute-0 sudo[212365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:26:26 compute-0 sudo[212365]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:26 compute-0 sudo[212396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:26:26 compute-0 sudo[212396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:26:26 compute-0 sudo[212488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sayyrznlpcwwxfzeqpcghxvpsnznrurf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678786.0836117-54-221661522403154/AnsiballZ_stat.py'
Jan 29 09:26:26 compute-0 sudo[212488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:26:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:26:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:26:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:26:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:26:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:26:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v472: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:26 compute-0 podman[212504]: 2026-01-29 09:26:26.673880691 +0000 UTC m=+0.044969828 container create 51a5167b040dcfb196de7bbac23542c780fb83e6db2eae35c167114ac5af0882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_snyder, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:26:26 compute-0 python3.9[212490]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:26:26 compute-0 systemd[1]: Started libpod-conmon-51a5167b040dcfb196de7bbac23542c780fb83e6db2eae35c167114ac5af0882.scope.
Jan 29 09:26:26 compute-0 sudo[212488]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:26 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:26:26 compute-0 podman[212504]: 2026-01-29 09:26:26.659597261 +0000 UTC m=+0.030686418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:26:26 compute-0 podman[212504]: 2026-01-29 09:26:26.758398468 +0000 UTC m=+0.129487625 container init 51a5167b040dcfb196de7bbac23542c780fb83e6db2eae35c167114ac5af0882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_snyder, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 09:26:26 compute-0 podman[212504]: 2026-01-29 09:26:26.765467151 +0000 UTC m=+0.136556288 container start 51a5167b040dcfb196de7bbac23542c780fb83e6db2eae35c167114ac5af0882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:26:26 compute-0 podman[212504]: 2026-01-29 09:26:26.76948428 +0000 UTC m=+0.140573437 container attach 51a5167b040dcfb196de7bbac23542c780fb83e6db2eae35c167114ac5af0882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_snyder, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:26:26 compute-0 modest_snyder[212521]: 167 167
Jan 29 09:26:26 compute-0 systemd[1]: libpod-51a5167b040dcfb196de7bbac23542c780fb83e6db2eae35c167114ac5af0882.scope: Deactivated successfully.
Jan 29 09:26:26 compute-0 podman[212504]: 2026-01-29 09:26:26.772812941 +0000 UTC m=+0.143902078 container died 51a5167b040dcfb196de7bbac23542c780fb83e6db2eae35c167114ac5af0882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d061f37894c169504c72280632bb2cf975f0b0d48947cbf0869bdef73efe4e4d-merged.mount: Deactivated successfully.
Jan 29 09:26:26 compute-0 podman[212504]: 2026-01-29 09:26:26.818743985 +0000 UTC m=+0.189833122 container remove 51a5167b040dcfb196de7bbac23542c780fb83e6db2eae35c167114ac5af0882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:26:26 compute-0 systemd[1]: libpod-conmon-51a5167b040dcfb196de7bbac23542c780fb83e6db2eae35c167114ac5af0882.scope: Deactivated successfully.
Jan 29 09:26:26 compute-0 podman[212570]: 2026-01-29 09:26:26.932015087 +0000 UTC m=+0.038265996 container create 49301e058cbe763f25224519bce1a64a14acf916c6e39f152066fea8ce8908a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haibt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 09:26:26 compute-0 systemd[1]: Started libpod-conmon-49301e058cbe763f25224519bce1a64a14acf916c6e39f152066fea8ce8908a8.scope.
Jan 29 09:26:27 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:26:27 compute-0 podman[212570]: 2026-01-29 09:26:26.914833218 +0000 UTC m=+0.021084157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1b3044b95f3cb0460c56361af1a24d1d62ad777babe536c9cf752fbb245b9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1b3044b95f3cb0460c56361af1a24d1d62ad777babe536c9cf752fbb245b9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1b3044b95f3cb0460c56361af1a24d1d62ad777babe536c9cf752fbb245b9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1b3044b95f3cb0460c56361af1a24d1d62ad777babe536c9cf752fbb245b9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:26:27 compute-0 podman[212570]: 2026-01-29 09:26:27.040848327 +0000 UTC m=+0.147099266 container init 49301e058cbe763f25224519bce1a64a14acf916c6e39f152066fea8ce8908a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:26:27 compute-0 podman[212570]: 2026-01-29 09:26:27.048484326 +0000 UTC m=+0.154735245 container start 49301e058cbe763f25224519bce1a64a14acf916c6e39f152066fea8ce8908a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:26:27 compute-0 podman[212570]: 2026-01-29 09:26:27.058348045 +0000 UTC m=+0.164598984 container attach 49301e058cbe763f25224519bce1a64a14acf916c6e39f152066fea8ce8908a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haibt, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 29 09:26:27 compute-0 sudo[212718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnwwyifgudpzptkdbyrjxbpcdowfjbdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678786.9099035-64-3596709736878/AnsiballZ_command.py'
Jan 29 09:26:27 compute-0 sudo[212718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:27 compute-0 kind_haibt[212638]: {
Jan 29 09:26:27 compute-0 kind_haibt[212638]:     "0": [
Jan 29 09:26:27 compute-0 kind_haibt[212638]:         {
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "devices": [
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "/dev/loop3"
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             ],
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_name": "ceph_lv0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_size": "21470642176",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "name": "ceph_lv0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "tags": {
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.cluster_name": "ceph",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.crush_device_class": "",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.encrypted": "0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.objectstore": "bluestore",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.osd_id": "0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.type": "block",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.vdo": "0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.with_tpm": "0"
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             },
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "type": "block",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "vg_name": "ceph_vg0"
Jan 29 09:26:27 compute-0 kind_haibt[212638]:         }
Jan 29 09:26:27 compute-0 kind_haibt[212638]:     ],
Jan 29 09:26:27 compute-0 kind_haibt[212638]:     "1": [
Jan 29 09:26:27 compute-0 kind_haibt[212638]:         {
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "devices": [
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "/dev/loop4"
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             ],
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_name": "ceph_lv1",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_size": "21470642176",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "name": "ceph_lv1",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "tags": {
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.cluster_name": "ceph",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.crush_device_class": "",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.encrypted": "0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.objectstore": "bluestore",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.osd_id": "1",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.type": "block",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.vdo": "0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.with_tpm": "0"
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             },
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "type": "block",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "vg_name": "ceph_vg1"
Jan 29 09:26:27 compute-0 kind_haibt[212638]:         }
Jan 29 09:26:27 compute-0 kind_haibt[212638]:     ],
Jan 29 09:26:27 compute-0 kind_haibt[212638]:     "2": [
Jan 29 09:26:27 compute-0 kind_haibt[212638]:         {
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "devices": [
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "/dev/loop5"
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             ],
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_name": "ceph_lv2",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_size": "21470642176",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "name": "ceph_lv2",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "tags": {
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.cluster_name": "ceph",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.crush_device_class": "",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.encrypted": "0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.objectstore": "bluestore",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.osd_id": "2",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.type": "block",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.vdo": "0",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:                 "ceph.with_tpm": "0"
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             },
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "type": "block",
Jan 29 09:26:27 compute-0 kind_haibt[212638]:             "vg_name": "ceph_vg2"
Jan 29 09:26:27 compute-0 kind_haibt[212638]:         }
Jan 29 09:26:27 compute-0 kind_haibt[212638]:     ]
Jan 29 09:26:27 compute-0 kind_haibt[212638]: }
Jan 29 09:26:27 compute-0 systemd[1]: libpod-49301e058cbe763f25224519bce1a64a14acf916c6e39f152066fea8ce8908a8.scope: Deactivated successfully.
Jan 29 09:26:27 compute-0 podman[212570]: 2026-01-29 09:26:27.363967367 +0000 UTC m=+0.470218276 container died 49301e058cbe763f25224519bce1a64a14acf916c6e39f152066fea8ce8908a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haibt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:26:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-df1b3044b95f3cb0460c56361af1a24d1d62ad777babe536c9cf752fbb245b9f-merged.mount: Deactivated successfully.
Jan 29 09:26:27 compute-0 podman[212570]: 2026-01-29 09:26:27.418384092 +0000 UTC m=+0.524634991 container remove 49301e058cbe763f25224519bce1a64a14acf916c6e39f152066fea8ce8908a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haibt, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 09:26:27 compute-0 systemd[1]: libpod-conmon-49301e058cbe763f25224519bce1a64a14acf916c6e39f152066fea8ce8908a8.scope: Deactivated successfully.
Jan 29 09:26:27 compute-0 sudo[212396]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:27 compute-0 python3.9[212722]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:26:27 compute-0 sudo[212718]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:27 compute-0 sudo[212737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:26:27 compute-0 sudo[212737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:26:27 compute-0 sudo[212737]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:27 compute-0 sudo[212762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:26:27 compute-0 sudo[212762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:26:27 compute-0 podman[212868]: 2026-01-29 09:26:27.817531926 +0000 UTC m=+0.043838477 container create cf1729d684e8d033a7a5e71b85fb1a518afd378121c2c4096cd937b8151f43c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bohr, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:26:27 compute-0 systemd[1]: Started libpod-conmon-cf1729d684e8d033a7a5e71b85fb1a518afd378121c2c4096cd937b8151f43c0.scope.
Jan 29 09:26:27 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:26:27 compute-0 podman[212868]: 2026-01-29 09:26:27.893361096 +0000 UTC m=+0.119667727 container init cf1729d684e8d033a7a5e71b85fb1a518afd378121c2c4096cd937b8151f43c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bohr, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:26:27 compute-0 podman[212868]: 2026-01-29 09:26:27.797648234 +0000 UTC m=+0.023954805 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:26:27 compute-0 podman[212868]: 2026-01-29 09:26:27.903902544 +0000 UTC m=+0.130209105 container start cf1729d684e8d033a7a5e71b85fb1a518afd378121c2c4096cd937b8151f43c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 09:26:27 compute-0 great_bohr[212913]: 167 167
Jan 29 09:26:27 compute-0 systemd[1]: libpod-cf1729d684e8d033a7a5e71b85fb1a518afd378121c2c4096cd937b8151f43c0.scope: Deactivated successfully.
Jan 29 09:26:27 compute-0 podman[212868]: 2026-01-29 09:26:27.910454373 +0000 UTC m=+0.136760964 container attach cf1729d684e8d033a7a5e71b85fb1a518afd378121c2c4096cd937b8151f43c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:26:27 compute-0 podman[212868]: 2026-01-29 09:26:27.910994267 +0000 UTC m=+0.137300858 container died cf1729d684e8d033a7a5e71b85fb1a518afd378121c2c4096cd937b8151f43c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bohr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 09:26:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5bf0fa988849ba404657f12143685b11fc71c229df5d71af6553ff33f380b74-merged.mount: Deactivated successfully.
Jan 29 09:26:27 compute-0 podman[212868]: 2026-01-29 09:26:27.957048044 +0000 UTC m=+0.183354615 container remove cf1729d684e8d033a7a5e71b85fb1a518afd378121c2c4096cd937b8151f43c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bohr, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:26:27 compute-0 systemd[1]: libpod-conmon-cf1729d684e8d033a7a5e71b85fb1a518afd378121c2c4096cd937b8151f43c0.scope: Deactivated successfully.
Jan 29 09:26:27 compute-0 sudo[212981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usyjplzneyxfqjrvredoxwsatdimqhyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678787.7103043-74-254959171751262/AnsiballZ_stat.py'
Jan 29 09:26:27 compute-0 sudo[212981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:28 compute-0 podman[212989]: 2026-01-29 09:26:28.085913812 +0000 UTC m=+0.039591562 container create c5ec3bdd94a26f5f47fe78718a78036714cfb6bcd9f8ce07cf77fdb4f9fc1721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:26:28 compute-0 systemd[1]: Started libpod-conmon-c5ec3bdd94a26f5f47fe78718a78036714cfb6bcd9f8ce07cf77fdb4f9fc1721.scope.
Jan 29 09:26:28 compute-0 ceph-mon[75183]: pgmap v472: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:28 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0881d556e17656060cba53a62b27650ea378f6d4eef49e45fd912d2c68e5f2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0881d556e17656060cba53a62b27650ea378f6d4eef49e45fd912d2c68e5f2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0881d556e17656060cba53a62b27650ea378f6d4eef49e45fd912d2c68e5f2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0881d556e17656060cba53a62b27650ea378f6d4eef49e45fd912d2c68e5f2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:26:28 compute-0 podman[212989]: 2026-01-29 09:26:28.065807193 +0000 UTC m=+0.019484973 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:26:28 compute-0 podman[212989]: 2026-01-29 09:26:28.180448892 +0000 UTC m=+0.134126692 container init c5ec3bdd94a26f5f47fe78718a78036714cfb6bcd9f8ce07cf77fdb4f9fc1721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 29 09:26:28 compute-0 python3.9[212983]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:26:28 compute-0 podman[212989]: 2026-01-29 09:26:28.187539606 +0000 UTC m=+0.141217366 container start c5ec3bdd94a26f5f47fe78718a78036714cfb6bcd9f8ce07cf77fdb4f9fc1721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 09:26:28 compute-0 podman[212989]: 2026-01-29 09:26:28.192978394 +0000 UTC m=+0.146656234 container attach c5ec3bdd94a26f5f47fe78718a78036714cfb6bcd9f8ce07cf77fdb4f9fc1721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:26:28 compute-0 sudo[212981]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:28 compute-0 sudo[213196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnrpprfaugyumqwxegprxwxnxqkadgdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678788.340694-82-108756012347702/AnsiballZ_command.py'
Jan 29 09:26:28 compute-0 sudo[213196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v473: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:28 compute-0 python3.9[213204]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:26:28 compute-0 lvm[213235]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:26:28 compute-0 lvm[213235]: VG ceph_vg0 finished
Jan 29 09:26:28 compute-0 lvm[213238]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:26:28 compute-0 lvm[213238]: VG ceph_vg1 finished
Jan 29 09:26:28 compute-0 sudo[213196]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:28 compute-0 lvm[213240]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:26:28 compute-0 lvm[213240]: VG ceph_vg2 finished
Jan 29 09:26:28 compute-0 adoring_albattani[213005]: {}
Jan 29 09:26:28 compute-0 systemd[1]: libpod-c5ec3bdd94a26f5f47fe78718a78036714cfb6bcd9f8ce07cf77fdb4f9fc1721.scope: Deactivated successfully.
Jan 29 09:26:28 compute-0 podman[212989]: 2026-01-29 09:26:28.954225421 +0000 UTC m=+0.907903171 container died c5ec3bdd94a26f5f47fe78718a78036714cfb6bcd9f8ce07cf77fdb4f9fc1721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:26:28 compute-0 systemd[1]: libpod-c5ec3bdd94a26f5f47fe78718a78036714cfb6bcd9f8ce07cf77fdb4f9fc1721.scope: Consumed 1.120s CPU time.
Jan 29 09:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0881d556e17656060cba53a62b27650ea378f6d4eef49e45fd912d2c68e5f2d-merged.mount: Deactivated successfully.
Jan 29 09:26:29 compute-0 podman[212989]: 2026-01-29 09:26:29.131959002 +0000 UTC m=+1.085636742 container remove c5ec3bdd94a26f5f47fe78718a78036714cfb6bcd9f8ce07cf77fdb4f9fc1721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:26:29 compute-0 systemd[1]: libpod-conmon-c5ec3bdd94a26f5f47fe78718a78036714cfb6bcd9f8ce07cf77fdb4f9fc1721.scope: Deactivated successfully.
Jan 29 09:26:29 compute-0 sudo[213406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teauressxdxrwtfwrnharswcwiwdesyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678788.91911-90-255754672910985/AnsiballZ_stat.py'
Jan 29 09:26:29 compute-0 sudo[213406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:29 compute-0 sudo[212762]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:26:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:26:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:26:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:26:29 compute-0 sudo[213409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:26:29 compute-0 sudo[213409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:26:29 compute-0 sudo[213409]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:29 compute-0 python3.9[213408]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:26:29 compute-0 sudo[213406]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:29 compute-0 sudo[213554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yesgraaqpplcgizjfakibdwfaibqvhtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678788.91911-90-255754672910985/AnsiballZ_copy.py'
Jan 29 09:26:29 compute-0 sudo[213554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:30 compute-0 python3.9[213556]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678788.91911-90-255754672910985/.source.iscsi _original_basename=.v10w451g follow=False checksum=e11228b00605f041cc6f9565b67eae346d433d43 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:30 compute-0 sudo[213554]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:30 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:26:30 compute-0 ceph-mon[75183]: pgmap v473: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:26:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:26:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v474: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:30 compute-0 sudo[213706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnkhphldhtrowgmtwjjkuaksyslxhrbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678790.2049212-105-132290666701789/AnsiballZ_file.py'
Jan 29 09:26:30 compute-0 sudo[213706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:30 compute-0 python3.9[213708]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:30 compute-0 sudo[213706]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:31 compute-0 sudo[213858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqqhsodrmdqskhkbfuupmputovuxlspl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678790.9444344-113-106456434097485/AnsiballZ_lineinfile.py'
Jan 29 09:26:31 compute-0 sudo[213858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:31 compute-0 ceph-mon[75183]: pgmap v474: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:31 compute-0 python3.9[213860]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:31 compute-0 sudo[213858]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:32 compute-0 sudo[214010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptywiqogiypshznsyocsvhxzcfpppvvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678791.742956-122-16393017236421/AnsiballZ_systemd_service.py'
Jan 29 09:26:32 compute-0 sudo[214010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v475: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:32 compute-0 python3.9[214012]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:26:32 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 29 09:26:32 compute-0 sudo[214010]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:33 compute-0 sudo[214166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrvwckxduahbsysybpvlncblmlcqnvwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678792.827958-130-177991337342825/AnsiballZ_systemd_service.py'
Jan 29 09:26:33 compute-0 sudo[214166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:33 compute-0 python3.9[214168]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:26:33 compute-0 systemd[1]: Reloading.
Jan 29 09:26:33 compute-0 systemd-rc-local-generator[214194]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:26:33 compute-0 systemd-sysv-generator[214199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:26:33 compute-0 ceph-mon[75183]: pgmap v475: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:33 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 29 09:26:33 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 29 09:26:33 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 09:26:33 compute-0 systemd[1]: Started Open-iSCSI.
Jan 29 09:26:33 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 29 09:26:33 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 29 09:26:33 compute-0 sudo[214166]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v476: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:34 compute-0 python3.9[214367]: ansible-ansible.builtin.service_facts Invoked
Jan 29 09:26:34 compute-0 network[214384]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 09:26:34 compute-0 network[214385]: 'network-scripts' will be removed from distribution in near future.
Jan 29 09:26:34 compute-0 network[214386]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 09:26:35 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:26:35 compute-0 ceph-mon[75183]: pgmap v476: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v477: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:37 compute-0 ceph-mon[75183]: pgmap v477: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:37 compute-0 sudo[214668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohpnukgfvweaepqfiiaqltelyngoocic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678797.5462375-153-28885698221562/AnsiballZ_dnf.py'
Jan 29 09:26:37 compute-0 sudo[214668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:37 compute-0 podman[214630]: 2026-01-29 09:26:37.857589312 +0000 UTC m=+0.080523479 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 29 09:26:38 compute-0 python3.9[214678]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:26:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v478: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:39 compute-0 ceph-mon[75183]: pgmap v478: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:26:40 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 09:26:40 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 29 09:26:40 compute-0 systemd[1]: Reloading.
Jan 29 09:26:40 compute-0 systemd-rc-local-generator[214729]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:26:40 compute-0 systemd-sysv-generator[214735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:26:40 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 09:26:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v479: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 09:26:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 29 09:26:40 compute-0 systemd[1]: run-ra0cf611b3e06406aa9496f418a1f21fd.service: Deactivated successfully.
Jan 29 09:26:40 compute-0 sudo[214668]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:41 compute-0 sudo[215009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kinricqtvuriulpjevexpvyfzaykqzgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678801.1867187-162-270023216169078/AnsiballZ_file.py'
Jan 29 09:26:41 compute-0 sudo[215009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:41 compute-0 podman[214972]: 2026-01-29 09:26:41.501174851 +0000 UTC m=+0.070870875 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 29 09:26:41 compute-0 python3.9[215013]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 29 09:26:41 compute-0 sudo[215009]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:41 compute-0 ceph-mon[75183]: pgmap v479: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:42 compute-0 sudo[215169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vezgevupadxaulpdwzacwvdcrgqorhxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678801.8750305-170-151203585822975/AnsiballZ_modprobe.py'
Jan 29 09:26:42 compute-0 sudo[215169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:42 compute-0 python3.9[215171]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 29 09:26:42 compute-0 sudo[215169]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v480: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:42 compute-0 sudo[215325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eccxusinkjktdteuzetxxspkgsartrrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678802.6269445-178-211462635119528/AnsiballZ_stat.py'
Jan 29 09:26:42 compute-0 sudo[215325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:43 compute-0 python3.9[215327]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:26:43 compute-0 sudo[215325]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:43 compute-0 sudo[215448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgfwzjbibuzaiubiqxeeotcbpnutxwzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678802.6269445-178-211462635119528/AnsiballZ_copy.py'
Jan 29 09:26:43 compute-0 sudo[215448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:43 compute-0 python3.9[215450]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678802.6269445-178-211462635119528/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:43 compute-0 sudo[215448]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:43 compute-0 ceph-mon[75183]: pgmap v480: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:44 compute-0 sudo[215600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prbfxlzowevltfhsmtgciwxdkmavpzje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678803.8757732-194-24089177821574/AnsiballZ_lineinfile.py'
Jan 29 09:26:44 compute-0 sudo[215600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:44 compute-0 python3.9[215602]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:44 compute-0 sudo[215600]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v481: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:26:45 compute-0 sudo[215752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxpvxifaymwlphmikwmxnkivaayghywj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678804.4806845-202-243038605619497/AnsiballZ_systemd.py'
Jan 29 09:26:45 compute-0 sudo[215752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:45 compute-0 python3.9[215754]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:26:45 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 09:26:45 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 29 09:26:45 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 29 09:26:45 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 29 09:26:45 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 29 09:26:45 compute-0 sudo[215752]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:45 compute-0 ceph-mon[75183]: pgmap v481: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:45 compute-0 sudo[215908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euabthrwfqguhatxkeauzhbjpmlomtvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678805.6321018-210-151566249717469/AnsiballZ_command.py'
Jan 29 09:26:45 compute-0 sudo[215908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:46 compute-0 python3.9[215910]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:26:46 compute-0 sudo[215908]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:46 compute-0 sudo[216061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtlwyfunqjgccpjwzhyxpoeizjzfpicv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678806.2993114-220-250146494244645/AnsiballZ_stat.py'
Jan 29 09:26:46 compute-0 sudo[216061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v482: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:46 compute-0 python3.9[216063]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:26:46 compute-0 sudo[216061]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:47 compute-0 sudo[216213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqkxmuzjrmfcinztfkgczqwlnunquqxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678806.8902524-229-112679744289530/AnsiballZ_stat.py'
Jan 29 09:26:47 compute-0 sudo[216213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:47 compute-0 python3.9[216215]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:26:47 compute-0 sudo[216213]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:47 compute-0 sudo[216336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-askzebxztifjvxwmcswrzkjzqwyjhkdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678806.8902524-229-112679744289530/AnsiballZ_copy.py'
Jan 29 09:26:47 compute-0 sudo[216336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:47 compute-0 ceph-mon[75183]: pgmap v482: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:47 compute-0 python3.9[216338]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678806.8902524-229-112679744289530/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:47 compute-0 sudo[216336]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:48 compute-0 sudo[216488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmioguxcgyfzdfmqctmzguoireckqpzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678807.9783227-244-215699434773267/AnsiballZ_command.py'
Jan 29 09:26:48 compute-0 sudo[216488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:48 compute-0 python3.9[216490]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:26:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v483: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:48 compute-0 sudo[216488]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:48 compute-0 sudo[216641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qudsslqtoqvfllmfsglnodbvdcaquniq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678808.770405-252-237282807773533/AnsiballZ_lineinfile.py'
Jan 29 09:26:48 compute-0 sudo[216641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:49 compute-0 python3.9[216643]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:49 compute-0 sudo[216641]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:49 compute-0 sudo[216793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgyjtmswdwlybcoavxpstjpapmqbacpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678809.366229-260-239949238687204/AnsiballZ_replace.py'
Jan 29 09:26:49 compute-0 sudo[216793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:50 compute-0 python3.9[216795]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:50 compute-0 sudo[216793]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:50 compute-0 sudo[216945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljakqkultnqcdggpvdhvftlwedkykcau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678810.2366152-268-209748280836085/AnsiballZ_replace.py'
Jan 29 09:26:50 compute-0 sudo[216945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v484: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:50 compute-0 python3.9[216947]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:50 compute-0 sudo[216945]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:51 compute-0 sudo[217097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vohrnrirkeprdhitirmxhtlekmnpijvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678810.8779187-277-64375665678934/AnsiballZ_lineinfile.py'
Jan 29 09:26:51 compute-0 sudo[217097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:26:51 compute-0 ceph-mon[75183]: pgmap v483: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:51 compute-0 python3.9[217099]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:51 compute-0 sudo[217097]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:51 compute-0 sudo[217249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwijdwplfloekmutaqjgdtmthwkqilyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678811.4918962-277-274334066412880/AnsiballZ_lineinfile.py'
Jan 29 09:26:51 compute-0 sudo[217249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:51 compute-0 python3.9[217251]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:51 compute-0 sudo[217249]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:52 compute-0 sudo[217401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvlflsmvbegvthwvbzdeqtubychvuygn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678812.0436966-277-201510935949824/AnsiballZ_lineinfile.py'
Jan 29 09:26:52 compute-0 sudo[217401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:52 compute-0 ceph-mon[75183]: pgmap v484: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:52 compute-0 python3.9[217403]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:52 compute-0 sudo[217401]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v485: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:52 compute-0 sudo[217553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmecdoavnvyctjbjbxdhdcbevfaasewu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678812.607349-277-174597162340927/AnsiballZ_lineinfile.py'
Jan 29 09:26:52 compute-0 sudo[217553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:53 compute-0 python3.9[217555]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:26:53 compute-0 sudo[217553]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:53 compute-0 sudo[217705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olfruaoeglufxzvortcxcdluiroztden ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678813.1756172-306-78876123320278/AnsiballZ_stat.py'
Jan 29 09:26:53 compute-0 sudo[217705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:53 compute-0 python3.9[217707]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:26:53 compute-0 sudo[217705]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:54 compute-0 sudo[217859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swimzusdxosfepdmfnjoawlpvvkrqcnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678813.7908971-314-14917506162741/AnsiballZ_command.py'
Jan 29 09:26:54 compute-0 sudo[217859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:54 compute-0 python3.9[217861]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:26:54 compute-0 sudo[217859]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:54 compute-0 ceph-mon[75183]: pgmap v485: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v486: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:54 compute-0 sudo[218012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulstalfzkbnuafjsejchrrodojkhgnmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678814.4085152-323-26180078445102/AnsiballZ_systemd_service.py'
Jan 29 09:26:54 compute-0 sudo[218012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:54 compute-0 python3.9[218014]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:26:55 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 29 09:26:55 compute-0 sudo[218012]: pam_unix(sudo:session): session closed for user root
Jan 29 09:26:55 compute-0 sudo[218168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puflpwbnuvgseiuizltwdsfqjijxsptn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678815.1712244-331-38222661132841/AnsiballZ_systemd_service.py'
Jan 29 09:26:55 compute-0 sudo[218168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:26:55 compute-0 python3.9[218170]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:26:55 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 29 09:26:55 compute-0 udevadm[218175]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 29 09:26:55 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 29 09:26:55 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 29 09:26:55 compute-0 multipathd[218178]: --------start up--------
Jan 29 09:26:55 compute-0 multipathd[218178]: read /etc/multipath.conf
Jan 29 09:26:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:26:55
Jan 29 09:26:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:26:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:26:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.data', 'backups']
Jan 29 09:26:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:26:55 compute-0 multipathd[218178]: path checkers start up
Jan 29 09:26:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v487: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:26:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:26:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v488: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v489: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:27:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:27:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v490: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:02 compute-0 ceph-mon[75183]: pgmap v486: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:02 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 29 09:27:02 compute-0 sudo[218168]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:03 compute-0 sudo[218335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywuleqatuqbpoprgfdwtaitofbauifha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678823.0052354-343-180743418512640/AnsiballZ_file.py'
Jan 29 09:27:03 compute-0 sudo[218335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:03 compute-0 python3.9[218337]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 29 09:27:03 compute-0 sudo[218335]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:03 compute-0 ceph-mon[75183]: pgmap v487: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:03 compute-0 ceph-mon[75183]: pgmap v488: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:03 compute-0 ceph-mon[75183]: pgmap v489: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:03 compute-0 ceph-mon[75183]: pgmap v490: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:03 compute-0 sudo[218487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqlvzbkuncygyevbxgkanpchaakhtsjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678823.5789874-351-84677812897107/AnsiballZ_modprobe.py'
Jan 29 09:27:03 compute-0 sudo[218487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:04 compute-0 python3.9[218489]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 29 09:27:04 compute-0 kernel: Key type psk registered
Jan 29 09:27:04 compute-0 sudo[218487]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:04 compute-0 sudo[218650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjoybkfzkjzqxcwrpptapybgesisuuuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678824.3490734-359-138464201250660/AnsiballZ_stat.py'
Jan 29 09:27:04 compute-0 sudo[218650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v491: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:04 compute-0 python3.9[218652]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:27:04 compute-0 sudo[218650]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:05 compute-0 sudo[218773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfdsewrazqyiljqaegterbpffcekwtpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678824.3490734-359-138464201250660/AnsiballZ_copy.py'
Jan 29 09:27:05 compute-0 sudo[218773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:05 compute-0 python3.9[218775]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769678824.3490734-359-138464201250660/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:05 compute-0 sudo[218773]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:05 compute-0 sudo[218925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puvhzhsbzurjiijsnvimgezfcpygybiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678825.4717922-375-216491841560228/AnsiballZ_lineinfile.py'
Jan 29 09:27:05 compute-0 sudo[218925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:05 compute-0 ceph-mon[75183]: pgmap v491: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:05 compute-0 python3.9[218927]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:05 compute-0 sudo[218925]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:27:06 compute-0 sudo[219077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffrcnklwlxcfdxugttulzmclfyqfvqsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678826.0510292-383-104555911733565/AnsiballZ_systemd.py'
Jan 29 09:27:06 compute-0 sudo[219077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:06 compute-0 python3.9[219079]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:27:06 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 09:27:06 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 29 09:27:06 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 29 09:27:06 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 29 09:27:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v492: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:06 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 29 09:27:06 compute-0 sudo[219077]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:07 compute-0 sudo[219233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjzpdcavtvfngricaikclmtkefxlcwoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678826.8250976-391-65633262016048/AnsiballZ_dnf.py'
Jan 29 09:27:07 compute-0 sudo[219233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:07 compute-0 python3.9[219235]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 09:27:07 compute-0 ceph-mon[75183]: pgmap v492: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:08 compute-0 podman[219237]: 2026-01-29 09:27:08.155666218 +0000 UTC m=+0.096161619 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 09:27:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v493: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:27:09.028 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:27:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:27:09.028 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:27:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:27:09.028 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:27:09 compute-0 ceph-mon[75183]: pgmap v493: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:09 compute-0 systemd[1]: Reloading.
Jan 29 09:27:10 compute-0 systemd-rc-local-generator[219291]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:27:10 compute-0 systemd-sysv-generator[219295]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:27:10 compute-0 systemd[1]: Reloading.
Jan 29 09:27:10 compute-0 systemd-rc-local-generator[219328]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:27:10 compute-0 systemd-sysv-generator[219332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:27:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v494: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:10 compute-0 systemd-logind[799]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 29 09:27:10 compute-0 systemd-logind[799]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 29 09:27:10 compute-0 lvm[219376]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:27:10 compute-0 lvm[219376]: VG ceph_vg1 finished
Jan 29 09:27:10 compute-0 lvm[219375]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:27:10 compute-0 lvm[219375]: VG ceph_vg2 finished
Jan 29 09:27:10 compute-0 lvm[219379]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:27:10 compute-0 lvm[219379]: VG ceph_vg0 finished
Jan 29 09:27:10 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 09:27:10 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 29 09:27:10 compute-0 systemd[1]: Reloading.
Jan 29 09:27:10 compute-0 systemd-sysv-generator[219429]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:27:10 compute-0 systemd-rc-local-generator[219426]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:27:11 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 09:27:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:27:11 compute-0 ceph-mon[75183]: pgmap v494: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:11 compute-0 sudo[219233]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:12 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 09:27:12 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 29 09:27:12 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.276s CPU time.
Jan 29 09:27:12 compute-0 systemd[1]: run-r8cc6157e894d431c965836528e9d27d1.service: Deactivated successfully.
Jan 29 09:27:12 compute-0 podman[220630]: 2026-01-29 09:27:12.121033244 +0000 UTC m=+0.054700340 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 29 09:27:12 compute-0 sudo[220752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfqgrxeuaiyaoczlyefqramzxtylxqcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678832.0155416-399-19127184882765/AnsiballZ_systemd_service.py'
Jan 29 09:27:12 compute-0 sudo[220752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:12 compute-0 python3.9[220754]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:27:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v495: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:12 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 29 09:27:12 compute-0 iscsid[214207]: iscsid shutting down.
Jan 29 09:27:12 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 29 09:27:12 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 29 09:27:12 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 29 09:27:12 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 29 09:27:12 compute-0 systemd[1]: Started Open-iSCSI.
Jan 29 09:27:12 compute-0 sudo[220752]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:13 compute-0 sudo[220908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qppnteeexbzdbircfnkisncktlpqqhlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678832.8132553-407-243382622117404/AnsiballZ_systemd_service.py'
Jan 29 09:27:13 compute-0 sudo[220908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:13 compute-0 python3.9[220910]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:27:13 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 29 09:27:13 compute-0 multipathd[218178]: exit (signal)
Jan 29 09:27:13 compute-0 multipathd[218178]: --------shut down-------
Jan 29 09:27:13 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 29 09:27:13 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 29 09:27:13 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 29 09:27:13 compute-0 multipathd[220916]: --------start up--------
Jan 29 09:27:13 compute-0 multipathd[220916]: read /etc/multipath.conf
Jan 29 09:27:13 compute-0 multipathd[220916]: path checkers start up
Jan 29 09:27:13 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 29 09:27:13 compute-0 sudo[220908]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:13 compute-0 ceph-mon[75183]: pgmap v495: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:14 compute-0 python3.9[221073]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 09:27:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v496: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:14 compute-0 sudo[221227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szrpeywspclofumnjrznmbookzxxqaoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678834.684684-425-136645225987676/AnsiballZ_file.py'
Jan 29 09:27:14 compute-0 sudo[221227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:15 compute-0 python3.9[221229]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:15 compute-0 sudo[221227]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:15 compute-0 sudo[221379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecxcaelldpbdvxhlocfsnzcchqpirdtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678835.442064-436-181338493934809/AnsiballZ_systemd_service.py'
Jan 29 09:27:15 compute-0 sudo[221379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:15 compute-0 ceph-mon[75183]: pgmap v496: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:16 compute-0 python3.9[221381]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 09:27:16 compute-0 systemd[1]: Reloading.
Jan 29 09:27:16 compute-0 systemd-sysv-generator[221413]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:27:16 compute-0 systemd-rc-local-generator[221408]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:27:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:27:16 compute-0 sudo[221379]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v497: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:16 compute-0 python3.9[221567]: ansible-ansible.builtin.service_facts Invoked
Jan 29 09:27:16 compute-0 network[221584]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 09:27:16 compute-0 network[221585]: 'network-scripts' will be removed from distribution in near future.
Jan 29 09:27:16 compute-0 network[221586]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 09:27:17 compute-0 ceph-mon[75183]: pgmap v497: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v498: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:19 compute-0 ceph-mon[75183]: pgmap v498: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v499: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:20 compute-0 sudo[221857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvhkbrgpqeqsdangyjcaarkzffzuddot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678840.664264-455-259672104781260/AnsiballZ_systemd_service.py'
Jan 29 09:27:20 compute-0 sudo[221857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:21 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:27:21 compute-0 python3.9[221859]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:27:21 compute-0 sudo[221857]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:21 compute-0 sudo[222010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbsmghizabxalbcujmqscbgvpsrsjesl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678841.4357903-455-95534837723302/AnsiballZ_systemd_service.py'
Jan 29 09:27:21 compute-0 sudo[222010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:21 compute-0 ceph-mon[75183]: pgmap v499: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:22 compute-0 python3.9[222012]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:27:22 compute-0 sudo[222010]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:22 compute-0 sudo[222163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axplygdguyngftjpmypisisdbshqinhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678842.1540477-455-18845890103697/AnsiballZ_systemd_service.py'
Jan 29 09:27:22 compute-0 sudo[222163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v500: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:22 compute-0 python3.9[222165]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:27:22 compute-0 sudo[222163]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:23 compute-0 sudo[222316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-povylagfengikshqjrvvwdjiqlbijtmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678842.8437278-455-227892195767578/AnsiballZ_systemd_service.py'
Jan 29 09:27:23 compute-0 sudo[222316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:23 compute-0 python3.9[222318]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:27:23 compute-0 sudo[222316]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:23 compute-0 sudo[222469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vesflpxkyqnffmahcnkeygdophvqhhps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678843.5773573-455-96364321454351/AnsiballZ_systemd_service.py'
Jan 29 09:27:23 compute-0 sudo[222469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:23 compute-0 ceph-mon[75183]: pgmap v500: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:24 compute-0 python3.9[222471]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:27:24 compute-0 sudo[222469]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:24 compute-0 sudo[222622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrtvhirgfwnkusmxsbjqclrcweatxjqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678844.2636504-455-101406682931389/AnsiballZ_systemd_service.py'
Jan 29 09:27:24 compute-0 sudo[222622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v501: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:24 compute-0 python3.9[222624]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:27:24 compute-0 sudo[222622]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:25 compute-0 sudo[222775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qixjuxcynkroggzbnniocdipzvtwedoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678845.001789-455-35676643483269/AnsiballZ_systemd_service.py'
Jan 29 09:27:25 compute-0 sudo[222775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:25 compute-0 python3.9[222777]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:27:25 compute-0 sudo[222775]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:25 compute-0 ceph-mon[75183]: pgmap v501: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:26 compute-0 sudo[222928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geeewesewyuaridbxcqubdpcbgkleaik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678845.7550497-455-104596853565881/AnsiballZ_systemd_service.py'
Jan 29 09:27:26 compute-0 sudo[222928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:26 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:27:26 compute-0 python3.9[222930]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:27:26 compute-0 sudo[222928]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:27:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:27:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:27:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:27:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:27:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:27:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v502: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:26 compute-0 sudo[223081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvgvbmfgqsvodyaccsjivluvzonfgewx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678846.6089146-514-217165280403845/AnsiballZ_file.py'
Jan 29 09:27:26 compute-0 sudo[223081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:27 compute-0 python3.9[223083]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:27 compute-0 sudo[223081]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:27 compute-0 sudo[223233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnbpqoakpxbregbozmcloksvzqgzkwrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678847.1957192-514-161201859072751/AnsiballZ_file.py'
Jan 29 09:27:27 compute-0 sudo[223233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:27 compute-0 python3.9[223235]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:27 compute-0 sudo[223233]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:27 compute-0 ceph-mon[75183]: pgmap v502: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:28 compute-0 sudo[223385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftqrjobtxlcjnldohrufntumhdnmznnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678847.7695026-514-225187798420976/AnsiballZ_file.py'
Jan 29 09:27:28 compute-0 sudo[223385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:28 compute-0 python3.9[223387]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:28 compute-0 sudo[223385]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v503: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:28 compute-0 sudo[223537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kojbjlpcnupvgkvpjpirpkmwpdxcvqli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678848.3878906-514-266500107051071/AnsiballZ_file.py'
Jan 29 09:27:28 compute-0 sudo[223537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:28 compute-0 python3.9[223539]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:28 compute-0 sudo[223537]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:29 compute-0 sudo[223689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eulahfxqbrqyvsgfreqfbvekgvqnwleb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678848.9655335-514-107558465208120/AnsiballZ_file.py'
Jan 29 09:27:29 compute-0 sudo[223689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:29 compute-0 sudo[223692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:27:29 compute-0 sudo[223692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:27:29 compute-0 sudo[223692]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:29 compute-0 sudo[223717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:27:29 compute-0 sudo[223717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:27:29 compute-0 python3.9[223691]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:29 compute-0 sudo[223689]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:29 compute-0 sudo[223910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbouezguemmylxinsrzpazzocgnlrfcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678849.5091684-514-208455678497230/AnsiballZ_file.py'
Jan 29 09:27:29 compute-0 sudo[223910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:29 compute-0 sudo[223717]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:27:29 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:27:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:27:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:27:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:27:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:27:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:27:29 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:27:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:27:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:27:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:27:29 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:27:29 compute-0 sudo[223925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:27:29 compute-0 sudo[223925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:27:29 compute-0 sudo[223925]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:29 compute-0 python3.9[223912]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:29 compute-0 sudo[223950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:27:29 compute-0 sudo[223950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:27:29 compute-0 sudo[223910]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:30 compute-0 ceph-mon[75183]: pgmap v503: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:27:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:27:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:27:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:27:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:27:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:27:30 compute-0 podman[224086]: 2026-01-29 09:27:30.278392292 +0000 UTC m=+0.049026186 container create a3a7edf796fe19494697160fc3aefc84087ca3210b2a63ff30c2467d1eb8a455 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:27:30 compute-0 systemd[1]: Started libpod-conmon-a3a7edf796fe19494697160fc3aefc84087ca3210b2a63ff30c2467d1eb8a455.scope.
Jan 29 09:27:30 compute-0 sudo[224152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohbsalheumfowlglsxgvqjwwqmdrkcbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678850.0913851-514-222696599416254/AnsiballZ_file.py'
Jan 29 09:27:30 compute-0 sudo[224152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:30 compute-0 podman[224086]: 2026-01-29 09:27:30.253186696 +0000 UTC m=+0.023820620 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:27:30 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:27:30 compute-0 podman[224086]: 2026-01-29 09:27:30.372336159 +0000 UTC m=+0.142970073 container init a3a7edf796fe19494697160fc3aefc84087ca3210b2a63ff30c2467d1eb8a455 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 09:27:30 compute-0 podman[224086]: 2026-01-29 09:27:30.38043598 +0000 UTC m=+0.151069874 container start a3a7edf796fe19494697160fc3aefc84087ca3210b2a63ff30c2467d1eb8a455 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 29 09:27:30 compute-0 systemd[1]: libpod-a3a7edf796fe19494697160fc3aefc84087ca3210b2a63ff30c2467d1eb8a455.scope: Deactivated successfully.
Jan 29 09:27:30 compute-0 romantic_haslett[224153]: 167 167
Jan 29 09:27:30 compute-0 podman[224086]: 2026-01-29 09:27:30.386721421 +0000 UTC m=+0.157355345 container attach a3a7edf796fe19494697160fc3aefc84087ca3210b2a63ff30c2467d1eb8a455 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 09:27:30 compute-0 conmon[224153]: conmon a3a7edf796fe19494697 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3a7edf796fe19494697160fc3aefc84087ca3210b2a63ff30c2467d1eb8a455.scope/container/memory.events
Jan 29 09:27:30 compute-0 podman[224086]: 2026-01-29 09:27:30.388282373 +0000 UTC m=+0.158916267 container died a3a7edf796fe19494697160fc3aefc84087ca3210b2a63ff30c2467d1eb8a455 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_haslett, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1130a1a3c97ca70ce85941700b600757bdcf4a33540ba615cf7629a4c23d9568-merged.mount: Deactivated successfully.
Jan 29 09:27:30 compute-0 podman[224086]: 2026-01-29 09:27:30.447534046 +0000 UTC m=+0.218167930 container remove a3a7edf796fe19494697160fc3aefc84087ca3210b2a63ff30c2467d1eb8a455 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_haslett, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:27:30 compute-0 systemd[1]: libpod-conmon-a3a7edf796fe19494697160fc3aefc84087ca3210b2a63ff30c2467d1eb8a455.scope: Deactivated successfully.
Jan 29 09:27:30 compute-0 python3.9[224157]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:30 compute-0 sudo[224152]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:30 compute-0 podman[224182]: 2026-01-29 09:27:30.5887388 +0000 UTC m=+0.046709812 container create 4c1f0dbaff380c6de48e5764661cb9beb306e85fcb9cd71fb2e0f603fb0a032b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nightingale, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:27:30 compute-0 systemd[1]: Started libpod-conmon-4c1f0dbaff380c6de48e5764661cb9beb306e85fcb9cd71fb2e0f603fb0a032b.scope.
Jan 29 09:27:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v504: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:30 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ae206eb7c87a80e10a909ba8f9aad3ce00162bf89fe69af4dd54fc407f2694d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ae206eb7c87a80e10a909ba8f9aad3ce00162bf89fe69af4dd54fc407f2694d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ae206eb7c87a80e10a909ba8f9aad3ce00162bf89fe69af4dd54fc407f2694d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ae206eb7c87a80e10a909ba8f9aad3ce00162bf89fe69af4dd54fc407f2694d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ae206eb7c87a80e10a909ba8f9aad3ce00162bf89fe69af4dd54fc407f2694d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:27:30 compute-0 podman[224182]: 2026-01-29 09:27:30.56704788 +0000 UTC m=+0.025018882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:27:30 compute-0 podman[224182]: 2026-01-29 09:27:30.685619978 +0000 UTC m=+0.143591010 container init 4c1f0dbaff380c6de48e5764661cb9beb306e85fcb9cd71fb2e0f603fb0a032b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 09:27:30 compute-0 podman[224182]: 2026-01-29 09:27:30.692491265 +0000 UTC m=+0.150462267 container start 4c1f0dbaff380c6de48e5764661cb9beb306e85fcb9cd71fb2e0f603fb0a032b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nightingale, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:27:30 compute-0 podman[224182]: 2026-01-29 09:27:30.697251784 +0000 UTC m=+0.155222806 container attach 4c1f0dbaff380c6de48e5764661cb9beb306e85fcb9cd71fb2e0f603fb0a032b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nightingale, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:27:30 compute-0 sudo[224354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiyfordpiufjbycmjczoicnxnhqqgnpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678850.6693027-514-38726302992849/AnsiballZ_file.py'
Jan 29 09:27:30 compute-0 sudo[224354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:31 compute-0 peaceful_nightingale[224222]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:27:31 compute-0 peaceful_nightingale[224222]: --> All data devices are unavailable
Jan 29 09:27:31 compute-0 python3.9[224358]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:31 compute-0 sudo[224354]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:31 compute-0 systemd[1]: libpod-4c1f0dbaff380c6de48e5764661cb9beb306e85fcb9cd71fb2e0f603fb0a032b.scope: Deactivated successfully.
Jan 29 09:27:31 compute-0 podman[224370]: 2026-01-29 09:27:31.181355323 +0000 UTC m=+0.026229385 container died 4c1f0dbaff380c6de48e5764661cb9beb306e85fcb9cd71fb2e0f603fb0a032b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:27:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ae206eb7c87a80e10a909ba8f9aad3ce00162bf89fe69af4dd54fc407f2694d-merged.mount: Deactivated successfully.
Jan 29 09:27:31 compute-0 podman[224370]: 2026-01-29 09:27:31.226792869 +0000 UTC m=+0.071666901 container remove 4c1f0dbaff380c6de48e5764661cb9beb306e85fcb9cd71fb2e0f603fb0a032b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nightingale, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 09:27:31 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:27:31 compute-0 systemd[1]: libpod-conmon-4c1f0dbaff380c6de48e5764661cb9beb306e85fcb9cd71fb2e0f603fb0a032b.scope: Deactivated successfully.
Jan 29 09:27:31 compute-0 sudo[223950]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:31 compute-0 sudo[224432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:27:31 compute-0 sudo[224432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:27:31 compute-0 sudo[224432]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:31 compute-0 sudo[224483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:27:31 compute-0 sudo[224483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:27:31 compute-0 sudo[224584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfcbgafyzptcvysmnjxdotemmbpcorql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678851.2876365-571-28394059491204/AnsiballZ_file.py'
Jan 29 09:27:31 compute-0 sudo[224584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:31 compute-0 podman[224599]: 2026-01-29 09:27:31.682099814 +0000 UTC m=+0.048334007 container create 94a07c3791334c5fbce258f2c497ab26547b35b23ced9f9ec45c91b367f4c601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_williamson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 09:27:31 compute-0 systemd[1]: Started libpod-conmon-94a07c3791334c5fbce258f2c497ab26547b35b23ced9f9ec45c91b367f4c601.scope.
Jan 29 09:27:31 compute-0 podman[224599]: 2026-01-29 09:27:31.65697105 +0000 UTC m=+0.023205273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:27:31 compute-0 python3.9[224586]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:31 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:27:31 compute-0 sudo[224584]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:31 compute-0 podman[224599]: 2026-01-29 09:27:31.774711015 +0000 UTC m=+0.140945228 container init 94a07c3791334c5fbce258f2c497ab26547b35b23ced9f9ec45c91b367f4c601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:27:31 compute-0 podman[224599]: 2026-01-29 09:27:31.780724899 +0000 UTC m=+0.146959092 container start 94a07c3791334c5fbce258f2c497ab26547b35b23ced9f9ec45c91b367f4c601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 09:27:31 compute-0 suspicious_williamson[224616]: 167 167
Jan 29 09:27:31 compute-0 systemd[1]: libpod-94a07c3791334c5fbce258f2c497ab26547b35b23ced9f9ec45c91b367f4c601.scope: Deactivated successfully.
Jan 29 09:27:31 compute-0 podman[224599]: 2026-01-29 09:27:31.785849378 +0000 UTC m=+0.152083561 container attach 94a07c3791334c5fbce258f2c497ab26547b35b23ced9f9ec45c91b367f4c601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_williamson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:27:31 compute-0 podman[224599]: 2026-01-29 09:27:31.786173057 +0000 UTC m=+0.152407250 container died 94a07c3791334c5fbce258f2c497ab26547b35b23ced9f9ec45c91b367f4c601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 09:27:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5531d3c3ef4103a5c7682a0a628da9346ede1702e23e805ba4de12c96b53db8a-merged.mount: Deactivated successfully.
Jan 29 09:27:31 compute-0 podman[224599]: 2026-01-29 09:27:31.822309651 +0000 UTC m=+0.188543844 container remove 94a07c3791334c5fbce258f2c497ab26547b35b23ced9f9ec45c91b367f4c601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 29 09:27:31 compute-0 systemd[1]: libpod-conmon-94a07c3791334c5fbce258f2c497ab26547b35b23ced9f9ec45c91b367f4c601.scope: Deactivated successfully.
Jan 29 09:27:31 compute-0 podman[224690]: 2026-01-29 09:27:31.967506983 +0000 UTC m=+0.044462411 container create f955b15e2ec6d7fa9629b038307f1be66b86c0097f10be02a7d77e5ebbb2fcd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 09:27:32 compute-0 systemd[1]: Started libpod-conmon-f955b15e2ec6d7fa9629b038307f1be66b86c0097f10be02a7d77e5ebbb2fcd9.scope.
Jan 29 09:27:32 compute-0 ceph-mon[75183]: pgmap v504: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:32 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e7fb998c09d06a4ca9a721b778db2cf6c6036f639a4d616f1a800175147a78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e7fb998c09d06a4ca9a721b778db2cf6c6036f639a4d616f1a800175147a78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e7fb998c09d06a4ca9a721b778db2cf6c6036f639a4d616f1a800175147a78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e7fb998c09d06a4ca9a721b778db2cf6c6036f639a4d616f1a800175147a78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:27:32 compute-0 podman[224690]: 2026-01-29 09:27:31.947700184 +0000 UTC m=+0.024655632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:27:32 compute-0 podman[224690]: 2026-01-29 09:27:32.061602745 +0000 UTC m=+0.138558193 container init f955b15e2ec6d7fa9629b038307f1be66b86c0097f10be02a7d77e5ebbb2fcd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_stonebraker, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:27:32 compute-0 podman[224690]: 2026-01-29 09:27:32.06839491 +0000 UTC m=+0.145350338 container start f955b15e2ec6d7fa9629b038307f1be66b86c0097f10be02a7d77e5ebbb2fcd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:27:32 compute-0 podman[224690]: 2026-01-29 09:27:32.076613923 +0000 UTC m=+0.153569371 container attach f955b15e2ec6d7fa9629b038307f1be66b86c0097f10be02a7d77e5ebbb2fcd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_stonebraker, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:27:32 compute-0 sudo[224810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuuphsrdzakgzldvuacuadczspbhdets ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678851.8798983-571-52307899539822/AnsiballZ_file.py'
Jan 29 09:27:32 compute-0 sudo[224810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:32 compute-0 python3.9[224812]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]: {
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:     "0": [
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:         {
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "devices": [
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "/dev/loop3"
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             ],
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_name": "ceph_lv0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_size": "21470642176",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "name": "ceph_lv0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "tags": {
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.cluster_name": "ceph",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.crush_device_class": "",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.encrypted": "0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.objectstore": "bluestore",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.osd_id": "0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.type": "block",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.vdo": "0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.with_tpm": "0"
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             },
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "type": "block",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "vg_name": "ceph_vg0"
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:         }
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:     ],
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:     "1": [
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:         {
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "devices": [
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "/dev/loop4"
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             ],
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_name": "ceph_lv1",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_size": "21470642176",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "name": "ceph_lv1",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "tags": {
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.cluster_name": "ceph",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.crush_device_class": "",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.encrypted": "0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.objectstore": "bluestore",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.osd_id": "1",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.type": "block",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.vdo": "0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.with_tpm": "0"
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             },
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "type": "block",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "vg_name": "ceph_vg1"
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:         }
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:     ],
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:     "2": [
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:         {
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "devices": [
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "/dev/loop5"
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             ],
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_name": "ceph_lv2",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_size": "21470642176",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "name": "ceph_lv2",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "tags": {
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.cluster_name": "ceph",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.crush_device_class": "",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.encrypted": "0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.objectstore": "bluestore",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.osd_id": "2",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.type": "block",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.vdo": "0",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:                 "ceph.with_tpm": "0"
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             },
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "type": "block",
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:             "vg_name": "ceph_vg2"
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:         }
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]:     ]
Jan 29 09:27:32 compute-0 zen_stonebraker[224755]: }
Jan 29 09:27:32 compute-0 sudo[224810]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:32 compute-0 systemd[1]: libpod-f955b15e2ec6d7fa9629b038307f1be66b86c0097f10be02a7d77e5ebbb2fcd9.scope: Deactivated successfully.
Jan 29 09:27:32 compute-0 podman[224690]: 2026-01-29 09:27:32.398127246 +0000 UTC m=+0.475082674 container died f955b15e2ec6d7fa9629b038307f1be66b86c0097f10be02a7d77e5ebbb2fcd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_stonebraker, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:27:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2e7fb998c09d06a4ca9a721b778db2cf6c6036f639a4d616f1a800175147a78-merged.mount: Deactivated successfully.
Jan 29 09:27:32 compute-0 podman[224690]: 2026-01-29 09:27:32.449781162 +0000 UTC m=+0.526736580 container remove f955b15e2ec6d7fa9629b038307f1be66b86c0097f10be02a7d77e5ebbb2fcd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_stonebraker, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 09:27:32 compute-0 systemd[1]: libpod-conmon-f955b15e2ec6d7fa9629b038307f1be66b86c0097f10be02a7d77e5ebbb2fcd9.scope: Deactivated successfully.
Jan 29 09:27:32 compute-0 sudo[224483]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:32 compute-0 sudo[224885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:27:32 compute-0 sudo[224885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:27:32 compute-0 sudo[224885]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:32 compute-0 sudo[224933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:27:32 compute-0 sudo[224933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:27:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v505: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:32 compute-0 sudo[225028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfywmcjsbswjyoezslrywwzzrrtkadmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678852.4785066-571-76116806952721/AnsiballZ_file.py'
Jan 29 09:27:32 compute-0 sudo[225028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:32 compute-0 podman[225043]: 2026-01-29 09:27:32.899064711 +0000 UTC m=+0.035992490 container create 18d2dfda155b079bb55a1e214233be404e421b7027e9c31c35d75f6614de1cc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 09:27:32 compute-0 systemd[1]: Started libpod-conmon-18d2dfda155b079bb55a1e214233be404e421b7027e9c31c35d75f6614de1cc3.scope.
Jan 29 09:27:32 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:27:32 compute-0 podman[225043]: 2026-01-29 09:27:32.961804339 +0000 UTC m=+0.098732138 container init 18d2dfda155b079bb55a1e214233be404e421b7027e9c31c35d75f6614de1cc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 09:27:32 compute-0 python3.9[225030]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:32 compute-0 podman[225043]: 2026-01-29 09:27:32.968989185 +0000 UTC m=+0.105916964 container start 18d2dfda155b079bb55a1e214233be404e421b7027e9c31c35d75f6614de1cc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_clarke, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 09:27:32 compute-0 stupefied_clarke[225059]: 167 167
Jan 29 09:27:32 compute-0 systemd[1]: libpod-18d2dfda155b079bb55a1e214233be404e421b7027e9c31c35d75f6614de1cc3.scope: Deactivated successfully.
Jan 29 09:27:32 compute-0 podman[225043]: 2026-01-29 09:27:32.976394156 +0000 UTC m=+0.113321955 container attach 18d2dfda155b079bb55a1e214233be404e421b7027e9c31c35d75f6614de1cc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_clarke, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:27:32 compute-0 podman[225043]: 2026-01-29 09:27:32.977080445 +0000 UTC m=+0.114008224 container died 18d2dfda155b079bb55a1e214233be404e421b7027e9c31c35d75f6614de1cc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_clarke, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:27:32 compute-0 podman[225043]: 2026-01-29 09:27:32.883432626 +0000 UTC m=+0.020360435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:27:32 compute-0 sudo[225028]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ce617bf4081647c9a823ffc65efba571cb4eace530c382bec4e76a7ba830656-merged.mount: Deactivated successfully.
Jan 29 09:27:33 compute-0 podman[225043]: 2026-01-29 09:27:33.032659428 +0000 UTC m=+0.169587207 container remove 18d2dfda155b079bb55a1e214233be404e421b7027e9c31c35d75f6614de1cc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_clarke, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:27:33 compute-0 systemd[1]: libpod-conmon-18d2dfda155b079bb55a1e214233be404e421b7027e9c31c35d75f6614de1cc3.scope: Deactivated successfully.
Jan 29 09:27:33 compute-0 podman[225133]: 2026-01-29 09:27:33.16606784 +0000 UTC m=+0.043528466 container create e62db1bf4f0fa45f4a72d1c68cd166633c4cc7f7110bdf159f6fc87f4ba1c557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_ganguly, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 09:27:33 compute-0 systemd[1]: Started libpod-conmon-e62db1bf4f0fa45f4a72d1c68cd166633c4cc7f7110bdf159f6fc87f4ba1c557.scope.
Jan 29 09:27:33 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79313b368f46aaebc7d0c3c045b01d2c65caa86914a8243d3de4d3a89b9b5ad7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79313b368f46aaebc7d0c3c045b01d2c65caa86914a8243d3de4d3a89b9b5ad7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79313b368f46aaebc7d0c3c045b01d2c65caa86914a8243d3de4d3a89b9b5ad7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79313b368f46aaebc7d0c3c045b01d2c65caa86914a8243d3de4d3a89b9b5ad7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:27:33 compute-0 podman[225133]: 2026-01-29 09:27:33.146520248 +0000 UTC m=+0.023980904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:27:33 compute-0 podman[225133]: 2026-01-29 09:27:33.249049659 +0000 UTC m=+0.126510305 container init e62db1bf4f0fa45f4a72d1c68cd166633c4cc7f7110bdf159f6fc87f4ba1c557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 09:27:33 compute-0 podman[225133]: 2026-01-29 09:27:33.255659089 +0000 UTC m=+0.133119715 container start e62db1bf4f0fa45f4a72d1c68cd166633c4cc7f7110bdf159f6fc87f4ba1c557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_ganguly, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 09:27:33 compute-0 podman[225133]: 2026-01-29 09:27:33.259681058 +0000 UTC m=+0.137141684 container attach e62db1bf4f0fa45f4a72d1c68cd166633c4cc7f7110bdf159f6fc87f4ba1c557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 09:27:33 compute-0 sudo[225255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwflwwmaqoyvqkarxscepuesfswppoxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678853.0887537-571-251331269786277/AnsiballZ_file.py'
Jan 29 09:27:33 compute-0 sudo[225255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:33 compute-0 python3.9[225257]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:33 compute-0 sudo[225255]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:33 compute-0 sudo[225479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifgwlxeinwmavungonfjrznghkklejul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678853.6761744-571-218100932093549/AnsiballZ_file.py'
Jan 29 09:27:33 compute-0 sudo[225479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:33 compute-0 lvm[225481]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:27:33 compute-0 lvm[225481]: VG ceph_vg1 finished
Jan 29 09:27:33 compute-0 lvm[225482]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:27:33 compute-0 lvm[225482]: VG ceph_vg0 finished
Jan 29 09:27:33 compute-0 lvm[225486]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:27:33 compute-0 lvm[225486]: VG ceph_vg2 finished
Jan 29 09:27:34 compute-0 lvm[225487]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:27:34 compute-0 lvm[225487]: VG ceph_vg0 finished
Jan 29 09:27:34 compute-0 ceph-mon[75183]: pgmap v505: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:34 compute-0 silly_ganguly[225200]: {}
Jan 29 09:27:34 compute-0 systemd[1]: libpod-e62db1bf4f0fa45f4a72d1c68cd166633c4cc7f7110bdf159f6fc87f4ba1c557.scope: Deactivated successfully.
Jan 29 09:27:34 compute-0 systemd[1]: libpod-e62db1bf4f0fa45f4a72d1c68cd166633c4cc7f7110bdf159f6fc87f4ba1c557.scope: Consumed 1.222s CPU time.
Jan 29 09:27:34 compute-0 podman[225133]: 2026-01-29 09:27:34.127532653 +0000 UTC m=+1.004993279 container died e62db1bf4f0fa45f4a72d1c68cd166633c4cc7f7110bdf159f6fc87f4ba1c557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_ganguly, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:27:34 compute-0 python3.9[225484]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-79313b368f46aaebc7d0c3c045b01d2c65caa86914a8243d3de4d3a89b9b5ad7-merged.mount: Deactivated successfully.
Jan 29 09:27:34 compute-0 sudo[225479]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:34 compute-0 podman[225133]: 2026-01-29 09:27:34.179451106 +0000 UTC m=+1.056911732 container remove e62db1bf4f0fa45f4a72d1c68cd166633c4cc7f7110bdf159f6fc87f4ba1c557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_ganguly, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 09:27:34 compute-0 systemd[1]: libpod-conmon-e62db1bf4f0fa45f4a72d1c68cd166633c4cc7f7110bdf159f6fc87f4ba1c557.scope: Deactivated successfully.
Jan 29 09:27:34 compute-0 sudo[224933]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:27:34 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:27:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:27:34 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:27:34 compute-0 sudo[225531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:27:34 compute-0 sudo[225531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:27:34 compute-0 sudo[225531]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:34 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 29 09:27:34 compute-0 sudo[225677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-popmglpvkymmlnlvksofvtasbriccjjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678854.277266-571-1897741863153/AnsiballZ_file.py'
Jan 29 09:27:34 compute-0 sudo[225677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v506: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:34 compute-0 python3.9[225679]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:34 compute-0 sudo[225677]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:35 compute-0 sudo[225829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfobllkymwbflooyusooahqtepjkrfka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678854.8355818-571-181925917386028/AnsiballZ_file.py'
Jan 29 09:27:35 compute-0 sudo[225829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:35 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:27:35 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:27:35 compute-0 python3.9[225831]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.263320) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678855263362, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 2365011, "memory_usage": 2415160, "flush_reason": "Manual Compaction"}
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 29 09:27:35 compute-0 sudo[225829]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678855291851, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2293121, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9007, "largest_seqno": 11050, "table_properties": {"data_size": 2283943, "index_size": 5800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17934, "raw_average_key_size": 19, "raw_value_size": 2265551, "raw_average_value_size": 2462, "num_data_blocks": 267, "num_entries": 920, "num_filter_entries": 920, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769678616, "oldest_key_time": 1769678616, "file_creation_time": 1769678855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 28595 microseconds, and 4505 cpu microseconds.
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.291912) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2293121 bytes OK
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.291940) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.301806) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.301854) EVENT_LOG_v1 {"time_micros": 1769678855301845, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.301888) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2356476, prev total WAL file size 2373412, number of live WAL files 2.
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.302681) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2239KB)], [26(4611KB)]
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678855302879, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 7015255, "oldest_snapshot_seqno": -1}
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3214 keys, 5888537 bytes, temperature: kUnknown
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678855349252, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 5888537, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5863100, "index_size": 16304, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 74366, "raw_average_key_size": 23, "raw_value_size": 5801628, "raw_average_value_size": 1805, "num_data_blocks": 722, "num_entries": 3214, "num_filter_entries": 3214, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677896, "oldest_key_time": 0, "file_creation_time": 1769678855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.349537) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 5888537 bytes
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.351712) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.0 rd, 126.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 4.5 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(5.6) write-amplify(2.6) OK, records in: 3728, records dropped: 514 output_compression: NoCompression
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.351736) EVENT_LOG_v1 {"time_micros": 1769678855351723, "job": 10, "event": "compaction_finished", "compaction_time_micros": 46448, "compaction_time_cpu_micros": 12896, "output_level": 6, "num_output_files": 1, "total_output_size": 5888537, "num_input_records": 3728, "num_output_records": 3214, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678855352340, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678855353065, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.302547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.353144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.353150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.353152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.353153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:27:35 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:27:35.353155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:27:35 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 29 09:27:35 compute-0 sudo[225982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqiwxhbjwpdlyysybzfyrqttwqxqzcjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678855.3688436-571-135772253228449/AnsiballZ_file.py'
Jan 29 09:27:35 compute-0 sudo[225982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:35 compute-0 python3.9[225984]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:27:35 compute-0 sudo[225982]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:36 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:27:36 compute-0 sudo[226134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytpdoladgehakrzrvrrpgypblhduhllk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678856.045113-629-255213082412445/AnsiballZ_command.py'
Jan 29 09:27:36 compute-0 sudo[226134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:36 compute-0 ceph-mon[75183]: pgmap v506: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:36 compute-0 python3.9[226136]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:27:36 compute-0 sudo[226134]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v507: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:37 compute-0 python3.9[226288]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 29 09:27:37 compute-0 ceph-mon[75183]: pgmap v507: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:37 compute-0 sudo[226438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjqgsntgjmcraxlcucsjsdhmwxnqpehj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678857.4348865-647-133146146971243/AnsiballZ_systemd_service.py'
Jan 29 09:27:37 compute-0 sudo[226438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:38 compute-0 python3.9[226440]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 09:27:38 compute-0 systemd[1]: Reloading.
Jan 29 09:27:38 compute-0 systemd-rc-local-generator[226463]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:27:38 compute-0 systemd-sysv-generator[226469]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:27:38 compute-0 sudo[226438]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:38 compute-0 podman[226475]: 2026-01-29 09:27:38.462065657 +0000 UTC m=+0.090441953 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 09:27:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v508: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:38 compute-0 sudo[226650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiqtovupggcblrorzifqhoolotsaocxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678858.4943266-655-164440068217245/AnsiballZ_command.py'
Jan 29 09:27:38 compute-0 sudo[226650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:38 compute-0 python3.9[226652]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:27:38 compute-0 sudo[226650]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:39 compute-0 sudo[226803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgbrlyegvlttpfwjdbxdepxxhwgvlnyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678859.093567-655-35087138141744/AnsiballZ_command.py'
Jan 29 09:27:39 compute-0 sudo[226803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:39 compute-0 python3.9[226805]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:27:39 compute-0 sudo[226803]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:39 compute-0 ceph-mon[75183]: pgmap v508: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:39 compute-0 sudo[226956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugpmwzyoazqqvjgtvzdpwigiczagibhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678859.6518948-655-2142586901375/AnsiballZ_command.py'
Jan 29 09:27:39 compute-0 sudo[226956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:40 compute-0 python3.9[226958]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:27:40 compute-0 sudo[226956]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:40 compute-0 sudo[227109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niqqwnjmrapjuzctsemjsilexwstrsjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678860.2127526-655-240242688739950/AnsiballZ_command.py'
Jan 29 09:27:40 compute-0 sudo[227109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v509: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:40 compute-0 python3.9[227111]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:27:40 compute-0 sudo[227109]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:41 compute-0 sudo[227262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txtynwodgyptvmfghehsphfzwbdyqqrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678860.8457236-655-17118556885986/AnsiballZ_command.py'
Jan 29 09:27:41 compute-0 sudo[227262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:41 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:27:41 compute-0 python3.9[227264]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:27:41 compute-0 sudo[227262]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:41 compute-0 sudo[227415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arfiigbzgvvetquqvqzcxsyfrgvechvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678861.4498-655-60972370197581/AnsiballZ_command.py'
Jan 29 09:27:41 compute-0 sudo[227415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:41 compute-0 ceph-mon[75183]: pgmap v509: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:41 compute-0 python3.9[227417]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:27:41 compute-0 sudo[227415]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:42 compute-0 sudo[227580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edawatrhxduzzoagjuguqzokcvrqgqwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678862.0065262-655-70111560119236/AnsiballZ_command.py'
Jan 29 09:27:42 compute-0 sudo[227580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:42 compute-0 podman[227542]: 2026-01-29 09:27:42.270245724 +0000 UTC m=+0.061577328 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 29 09:27:42 compute-0 python3.9[227587]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:27:42 compute-0 sudo[227580]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v510: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:42 compute-0 sudo[227740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jatdaykstbsuyqcyncaqyxmniuqvbduo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678862.5570738-655-134132642284945/AnsiballZ_command.py'
Jan 29 09:27:42 compute-0 sudo[227740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:43 compute-0 python3.9[227742]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 09:27:43 compute-0 sudo[227740]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:43 compute-0 ceph-mon[75183]: pgmap v510: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:44 compute-0 sudo[227893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-datnniyzhonczurwusptetzbpwjkvfwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678863.8860953-734-184699673175956/AnsiballZ_file.py'
Jan 29 09:27:44 compute-0 sudo[227893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:44 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 29 09:27:44 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 29 09:27:44 compute-0 python3.9[227895]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:27:44 compute-0 sudo[227893]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v511: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:44 compute-0 sudo[228047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqmyyiosnvnpjqhrbliawtjhrhbuyekm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678864.4923882-734-29331251026963/AnsiballZ_file.py'
Jan 29 09:27:44 compute-0 sudo[228047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:44 compute-0 python3.9[228049]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:27:44 compute-0 sudo[228047]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:45 compute-0 sudo[228199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrhmeuoerxikenkqooysgwtmjrzzxtcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678865.0920522-734-36830846467988/AnsiballZ_file.py'
Jan 29 09:27:45 compute-0 sudo[228199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:45 compute-0 python3.9[228201]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:27:45 compute-0 sudo[228199]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:45 compute-0 ceph-mon[75183]: pgmap v511: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:45 compute-0 sudo[228351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obmptlmlpywcyjpwrnnihzhkoledltog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678865.6827931-756-97526792979950/AnsiballZ_file.py'
Jan 29 09:27:45 compute-0 sudo[228351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:46 compute-0 python3.9[228353]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:27:46 compute-0 sudo[228351]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:27:46 compute-0 sudo[228503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bumkyqdltqvtgzcgjshpbdosdokxphdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678866.242306-756-164572457604913/AnsiballZ_file.py'
Jan 29 09:27:46 compute-0 sudo[228503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v512: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:46 compute-0 python3.9[228505]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:27:46 compute-0 sudo[228503]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:47 compute-0 sudo[228655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvchiaghfgxrpannxbgdtqpjlcsfejpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678866.8142645-756-145502234736861/AnsiballZ_file.py'
Jan 29 09:27:47 compute-0 sudo[228655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:47 compute-0 python3.9[228657]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:27:47 compute-0 sudo[228655]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:47 compute-0 sudo[228807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atiywdayunrdzakxzledvydhrrflffdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678867.3817651-756-104914214409598/AnsiballZ_file.py'
Jan 29 09:27:47 compute-0 sudo[228807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:47 compute-0 ceph-mon[75183]: pgmap v512: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:47 compute-0 python3.9[228809]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:27:47 compute-0 sudo[228807]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:48 compute-0 sudo[228959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oodlhfgcponxibfqhzzmczoqcweufjzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678867.9479153-756-66893053461819/AnsiballZ_file.py'
Jan 29 09:27:48 compute-0 sudo[228959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:48 compute-0 python3.9[228961]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:27:48 compute-0 sudo[228959]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v513: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:48 compute-0 sudo[229111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdmdnynsesprzryglvvgnacsbdtelpjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678868.5048673-756-73281340516096/AnsiballZ_file.py'
Jan 29 09:27:48 compute-0 sudo[229111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:48 compute-0 python3.9[229113]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:27:48 compute-0 sudo[229111]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:49 compute-0 sudo[229263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-livtavbwunfvngefpnnmpcexdkactlnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678869.0715435-756-264283894276471/AnsiballZ_file.py'
Jan 29 09:27:49 compute-0 sudo[229263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:49 compute-0 python3.9[229265]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:27:49 compute-0 sudo[229263]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:49 compute-0 ceph-mon[75183]: pgmap v513: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v514: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:27:51 compute-0 ceph-mon[75183]: pgmap v514: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v515: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:53 compute-0 ceph-mon[75183]: pgmap v515: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:54 compute-0 sudo[229415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywyqkmuzyswoueabhrjsnhtmzhloxnix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678874.0473473-945-78022223985808/AnsiballZ_getent.py'
Jan 29 09:27:54 compute-0 sudo[229415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v516: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:54 compute-0 python3.9[229417]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 29 09:27:54 compute-0 sudo[229415]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:55 compute-0 sudo[229568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwmiyrhexqwalqjhrtvjkhopvvqkzqvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678874.8100498-953-269854692334159/AnsiballZ_group.py'
Jan 29 09:27:55 compute-0 sudo[229568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:55 compute-0 python3.9[229570]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 29 09:27:55 compute-0 groupadd[229571]: group added to /etc/group: name=nova, GID=42436
Jan 29 09:27:55 compute-0 groupadd[229571]: group added to /etc/gshadow: name=nova
Jan 29 09:27:55 compute-0 groupadd[229571]: new group: name=nova, GID=42436
Jan 29 09:27:55 compute-0 sudo[229568]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:55 compute-0 ceph-mon[75183]: pgmap v516: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:27:55
Jan 29 09:27:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:27:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:27:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['volumes', 'images', '.mgr', 'vms', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data']
Jan 29 09:27:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:27:56 compute-0 sudo[229726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unbjxsgmqdcbhyyxzfprhgwraztqpdjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678875.6458573-961-194051903901205/AnsiballZ_user.py'
Jan 29 09:27:56 compute-0 sudo[229726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:27:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:27:56 compute-0 python3.9[229728]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 29 09:27:56 compute-0 useradd[229730]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 29 09:27:56 compute-0 useradd[229730]: add 'nova' to group 'libvirt'
Jan 29 09:27:56 compute-0 useradd[229730]: add 'nova' to shadow group 'libvirt'
Jan 29 09:27:56 compute-0 sudo[229726]: pam_unix(sudo:session): session closed for user root
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v517: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:27:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:27:57 compute-0 sshd-session[229761]: Accepted publickey for zuul from 192.168.122.30 port 39232 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:27:57 compute-0 systemd-logind[799]: New session 50 of user zuul.
Jan 29 09:27:57 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 29 09:27:57 compute-0 sshd-session[229761]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:27:57 compute-0 sshd-session[229764]: Received disconnect from 192.168.122.30 port 39232:11: disconnected by user
Jan 29 09:27:57 compute-0 sshd-session[229764]: Disconnected from user zuul 192.168.122.30 port 39232
Jan 29 09:27:57 compute-0 sshd-session[229761]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:27:57 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 29 09:27:57 compute-0 systemd-logind[799]: Session 50 logged out. Waiting for processes to exit.
Jan 29 09:27:57 compute-0 systemd-logind[799]: Removed session 50.
Jan 29 09:27:57 compute-0 ceph-mon[75183]: pgmap v517: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:58 compute-0 python3.9[229914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:27:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v518: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:27:58 compute-0 python3.9[230035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678877.738986-986-187775561650887/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:27:59 compute-0 python3.9[230185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:27:59 compute-0 python3.9[230261]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:27:59 compute-0 ceph-mon[75183]: pgmap v518: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:00 compute-0 python3.9[230411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:28:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v519: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:00 compute-0 python3.9[230532]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678879.8640277-986-278567720097225/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:28:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:28:01 compute-0 python3.9[230682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:28:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:28:01 compute-0 ceph-mon[75183]: pgmap v519: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:01 compute-0 python3.9[230803]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678880.963221-986-16605732526445/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:28:02 compute-0 python3.9[230953]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:28:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v520: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:03 compute-0 python3.9[231074]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678882.0841436-986-155851446029664/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:28:03 compute-0 python3.9[231224]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:28:03 compute-0 ceph-mon[75183]: pgmap v520: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:04 compute-0 python3.9[231345]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678883.2227278-986-262114344883759/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:28:04 compute-0 sudo[231495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irhkzqlnokkedyyqlkqwvihkifcexdnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678884.3499706-1069-106945213123145/AnsiballZ_file.py'
Jan 29 09:28:04 compute-0 sudo[231495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v521: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:04 compute-0 python3.9[231497]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:28:04 compute-0 sudo[231495]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:05 compute-0 sudo[231647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blkzfglknppdoxseploqinvyogdqivbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678884.9388359-1077-215805264970422/AnsiballZ_copy.py'
Jan 29 09:28:05 compute-0 sudo[231647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:05 compute-0 python3.9[231649]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:28:05 compute-0 sudo[231647]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:05 compute-0 sudo[231799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smqgqxxeooigqviwqtpskfvgvosueoto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678885.5755653-1085-125643297875472/AnsiballZ_stat.py'
Jan 29 09:28:05 compute-0 sudo[231799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:05 compute-0 ceph-mon[75183]: pgmap v521: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:06 compute-0 python3.9[231801]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:28:06 compute-0 sudo[231799]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:28:06 compute-0 sudo[231951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgjpzbhamswdsvbezapwzmbkwtuokezv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678886.2004788-1093-218196437593824/AnsiballZ_stat.py'
Jan 29 09:28:06 compute-0 sudo[231951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v522: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:06 compute-0 python3.9[231953]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:28:06 compute-0 sudo[231951]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:07 compute-0 sudo[232074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngxckudlxukdmzfhqivgkvwtypnyghvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678886.2004788-1093-218196437593824/AnsiballZ_copy.py'
Jan 29 09:28:07 compute-0 sudo[232074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:07 compute-0 python3.9[232076]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769678886.2004788-1093-218196437593824/.source _original_basename=.j_1oocv8 follow=False checksum=b3abf4876d785aa7a68d273bcf9b94053500e336 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 29 09:28:07 compute-0 sudo[232074]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:07 compute-0 python3.9[232228]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:28:08 compute-0 ceph-mon[75183]: pgmap v522: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:08 compute-0 python3.9[232380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:28:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v523: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:08 compute-0 podman[232475]: 2026-01-29 09:28:08.851241059 +0000 UTC m=+0.111242950 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 29 09:28:08 compute-0 python3.9[232512]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678888.0833175-1119-188705723637610/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:28:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:28:09.029 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:28:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:28:09.030 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:28:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:28:09.030 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:28:09 compute-0 python3.9[232678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 09:28:10 compute-0 python3.9[232799]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769678889.0876765-1134-150233219211829/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 09:28:10 compute-0 ceph-mon[75183]: pgmap v523: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v524: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:10 compute-0 sudo[232949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fssimqdrhauqrwzuuscuiehpfumxumtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678890.3131428-1151-62792719117997/AnsiballZ_container_config_data.py'
Jan 29 09:28:10 compute-0 sudo[232949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:10 compute-0 python3.9[232951]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 29 09:28:11 compute-0 sudo[232949]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:28:11 compute-0 sudo[233101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xazkrlguurdocxrhkferqsvvgrifupyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678891.25633-1162-280541863443420/AnsiballZ_container_config_hash.py'
Jan 29 09:28:11 compute-0 sudo[233101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:11 compute-0 python3.9[233103]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 29 09:28:11 compute-0 sudo[233101]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:12 compute-0 ceph-mon[75183]: pgmap v524: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v525: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:12 compute-0 sudo[233266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkvgcdegmdmzeimavfvsafowvgiiyaun ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769678892.2621913-1172-62699078233832/AnsiballZ_edpm_container_manage.py'
Jan 29 09:28:12 compute-0 sudo[233266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:12 compute-0 podman[233227]: 2026-01-29 09:28:12.72133484 +0000 UTC m=+0.056982148 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 09:28:12 compute-0 python3[233273]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 29 09:28:14 compute-0 ceph-mon[75183]: pgmap v525: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v526: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:16 compute-0 ceph-mon[75183]: pgmap v526: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:28:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v527: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v528: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:19 compute-0 ceph-mon[75183]: pgmap v527: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v529: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:28:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v530: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:22 compute-0 ceph-mon[75183]: pgmap v528: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:24 compute-0 ceph-mon[75183]: pgmap v529: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:24 compute-0 ceph-mon[75183]: pgmap v530: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v531: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:26 compute-0 ceph-mon[75183]: pgmap v531: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:26 compute-0 podman[233288]: 2026-01-29 09:28:26.433652527 +0000 UTC m=+13.371833452 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 29 09:28:26 compute-0 podman[233374]: 2026-01-29 09:28:26.561475015 +0000 UTC m=+0.054276115 container create 5f1c91b3e7d0071b4a477a19ad9af9e77d526a726d85ac36e0e2fa4087782bdf (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 29 09:28:26 compute-0 podman[233374]: 2026-01-29 09:28:26.52789886 +0000 UTC m=+0.020700020 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 29 09:28:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:28:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:28:26 compute-0 python3[233273]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 29 09:28:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:28:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:28:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:28:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:28:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v532: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:26 compute-0 sudo[233266]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:27 compute-0 sudo[233559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkvurmvymygyellhwecqaasbhnoduxcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678906.8094068-1180-172547773764393/AnsiballZ_stat.py'
Jan 29 09:28:27 compute-0 sudo[233559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:27 compute-0 python3.9[233561]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:28:27 compute-0 sudo[233559]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:28:27 compute-0 ceph-mon[75183]: pgmap v532: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:27 compute-0 sudo[233713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bikswmmgttceduptcxndyuygbizsltqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678907.6456614-1192-279541379943620/AnsiballZ_container_config_data.py'
Jan 29 09:28:27 compute-0 sudo[233713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:28 compute-0 python3.9[233715]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 29 09:28:28 compute-0 sudo[233713]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:28 compute-0 sudo[233865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdtahotwjrktcuffqhqlcafkzatzrlfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678908.3550093-1203-179278253002919/AnsiballZ_container_config_hash.py'
Jan 29 09:28:28 compute-0 sudo[233865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v533: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:28 compute-0 python3.9[233867]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 29 09:28:28 compute-0 sudo[233865]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:29 compute-0 sudo[234017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soyapuwkzojihgpsuqzbilmremwxdmqw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769678909.0187862-1213-14739826076531/AnsiballZ_edpm_container_manage.py'
Jan 29 09:28:29 compute-0 sudo[234017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:29 compute-0 python3[234019]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 29 09:28:29 compute-0 podman[234055]: 2026-01-29 09:28:29.699485691 +0000 UTC m=+0.067639266 container create 3dc9f683cc6662d9d6b54fd29a40371e3f294a111333f929368fa860ad08d178 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251202, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible)
Jan 29 09:28:29 compute-0 podman[234055]: 2026-01-29 09:28:29.653076849 +0000 UTC m=+0.021230444 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 29 09:28:29 compute-0 python3[234019]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 29 09:28:29 compute-0 ceph-mon[75183]: pgmap v533: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:29 compute-0 sudo[234017]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:30 compute-0 sudo[234244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxkwwlbauksqrgpceqlkvyseytpjgvar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678909.9791322-1221-248560042446933/AnsiballZ_stat.py'
Jan 29 09:28:30 compute-0 sudo[234244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:30 compute-0 python3.9[234246]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:28:30 compute-0 sudo[234244]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v534: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:30 compute-0 sudo[234398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewrvqkdcmyijapdqgbjzmprxdorvjpjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678910.7368603-1230-214807278510199/AnsiballZ_file.py'
Jan 29 09:28:30 compute-0 sudo[234398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:31 compute-0 python3.9[234400]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:28:31 compute-0 sudo[234398]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:31 compute-0 sudo[234549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agcpobhqwixrmtoqqpcgtinxernmoxta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678911.2074668-1230-51611395677637/AnsiballZ_copy.py'
Jan 29 09:28:31 compute-0 sudo[234549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:31 compute-0 python3.9[234551]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769678911.2074668-1230-51611395677637/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 09:28:31 compute-0 sudo[234549]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:31 compute-0 ceph-mon[75183]: pgmap v534: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:31 compute-0 sudo[234625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kugmqzzmusxjpqtuksfavrijygujsxry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678911.2074668-1230-51611395677637/AnsiballZ_systemd.py'
Jan 29 09:28:31 compute-0 sudo[234625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:32 compute-0 python3.9[234627]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 09:28:32 compute-0 systemd[1]: Reloading.
Jan 29 09:28:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:28:32 compute-0 systemd-rc-local-generator[234650]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:28:32 compute-0 systemd-sysv-generator[234655]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:28:32 compute-0 sudo[234625]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v535: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:32 compute-0 sudo[234735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciowwqjnficvyatjmubajjhqmuflsjub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678911.2074668-1230-51611395677637/AnsiballZ_systemd.py'
Jan 29 09:28:32 compute-0 sudo[234735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:33 compute-0 python3.9[234737]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 09:28:33 compute-0 systemd[1]: Reloading.
Jan 29 09:28:33 compute-0 systemd-sysv-generator[234770]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 09:28:33 compute-0 systemd-rc-local-generator[234765]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 09:28:33 compute-0 systemd[1]: Starting nova_compute container...
Jan 29 09:28:33 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dacb864e577e054f5617c028a0ca2f2cfe9c8ca5b218c00e5954cf2c842a381/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dacb864e577e054f5617c028a0ca2f2cfe9c8ca5b218c00e5954cf2c842a381/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dacb864e577e054f5617c028a0ca2f2cfe9c8ca5b218c00e5954cf2c842a381/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dacb864e577e054f5617c028a0ca2f2cfe9c8ca5b218c00e5954cf2c842a381/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dacb864e577e054f5617c028a0ca2f2cfe9c8ca5b218c00e5954cf2c842a381/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:33 compute-0 podman[234778]: 2026-01-29 09:28:33.545594796 +0000 UTC m=+0.093615825 container init 3dc9f683cc6662d9d6b54fd29a40371e3f294a111333f929368fa860ad08d178 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 29 09:28:33 compute-0 podman[234778]: 2026-01-29 09:28:33.551617789 +0000 UTC m=+0.099638798 container start 3dc9f683cc6662d9d6b54fd29a40371e3f294a111333f929368fa860ad08d178 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 29 09:28:33 compute-0 podman[234778]: nova_compute
Jan 29 09:28:33 compute-0 nova_compute[234793]: + sudo -E kolla_set_configs
Jan 29 09:28:33 compute-0 systemd[1]: Started nova_compute container.
Jan 29 09:28:33 compute-0 sudo[234735]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Validating config file
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Copying service configuration files
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Deleting /etc/ceph
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Creating directory /etc/ceph
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /etc/ceph
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Writing out command to execute
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 29 09:28:33 compute-0 nova_compute[234793]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 29 09:28:33 compute-0 nova_compute[234793]: ++ cat /run_command
Jan 29 09:28:33 compute-0 nova_compute[234793]: + CMD=nova-compute
Jan 29 09:28:33 compute-0 nova_compute[234793]: + ARGS=
Jan 29 09:28:33 compute-0 nova_compute[234793]: + sudo kolla_copy_cacerts
Jan 29 09:28:33 compute-0 nova_compute[234793]: + [[ ! -n '' ]]
Jan 29 09:28:33 compute-0 nova_compute[234793]: + . kolla_extend_start
Jan 29 09:28:33 compute-0 nova_compute[234793]: + echo 'Running command: '\''nova-compute'\'''
Jan 29 09:28:33 compute-0 nova_compute[234793]: Running command: 'nova-compute'
Jan 29 09:28:33 compute-0 nova_compute[234793]: + umask 0022
Jan 29 09:28:33 compute-0 nova_compute[234793]: + exec nova-compute
Jan 29 09:28:33 compute-0 ceph-mon[75183]: pgmap v535: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:34 compute-0 sudo[234955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:28:34 compute-0 sudo[234955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:28:34 compute-0 sudo[234955]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:34 compute-0 python3.9[234954]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:28:34 compute-0 sudo[234980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:28:34 compute-0 sudo[234980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:28:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v536: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:34 compute-0 sudo[234980]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:28:34 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:28:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:28:34 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:28:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:28:34 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:28:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:28:34 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:28:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:28:34 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:28:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:28:34 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:28:34 compute-0 sudo[235187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:28:34 compute-0 sudo[235187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:28:34 compute-0 sudo[235187]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:34 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:28:34 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:28:34 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:28:34 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:28:34 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:28:34 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:28:34 compute-0 sudo[235212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:28:34 compute-0 sudo[235212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:28:35 compute-0 python3.9[235186]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:28:35 compute-0 podman[235273]: 2026-01-29 09:28:35.269813051 +0000 UTC m=+0.068143609 container create c7f8a02477c59d402567ea61b48908da8056f83a52bc3d8edbf6866397518cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 09:28:35 compute-0 systemd[1]: Started libpod-conmon-c7f8a02477c59d402567ea61b48908da8056f83a52bc3d8edbf6866397518cb7.scope.
Jan 29 09:28:35 compute-0 podman[235273]: 2026-01-29 09:28:35.221805886 +0000 UTC m=+0.020136464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:28:35 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:28:35 compute-0 podman[235273]: 2026-01-29 09:28:35.349013478 +0000 UTC m=+0.147344056 container init c7f8a02477c59d402567ea61b48908da8056f83a52bc3d8edbf6866397518cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_meninsky, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 29 09:28:35 compute-0 podman[235273]: 2026-01-29 09:28:35.356358426 +0000 UTC m=+0.154688984 container start c7f8a02477c59d402567ea61b48908da8056f83a52bc3d8edbf6866397518cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_meninsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:28:35 compute-0 upbeat_meninsky[235345]: 167 167
Jan 29 09:28:35 compute-0 systemd[1]: libpod-c7f8a02477c59d402567ea61b48908da8056f83a52bc3d8edbf6866397518cb7.scope: Deactivated successfully.
Jan 29 09:28:35 compute-0 podman[235273]: 2026-01-29 09:28:35.361646798 +0000 UTC m=+0.159977356 container attach c7f8a02477c59d402567ea61b48908da8056f83a52bc3d8edbf6866397518cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_meninsky, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:28:35 compute-0 podman[235273]: 2026-01-29 09:28:35.362108601 +0000 UTC m=+0.160439159 container died c7f8a02477c59d402567ea61b48908da8056f83a52bc3d8edbf6866397518cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:28:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-67144e2eb7ad1b678c453b8d899cd791121c5e8857fe25212c8472a9cf7c0725-merged.mount: Deactivated successfully.
Jan 29 09:28:35 compute-0 podman[235273]: 2026-01-29 09:28:35.428310687 +0000 UTC m=+0.226641245 container remove c7f8a02477c59d402567ea61b48908da8056f83a52bc3d8edbf6866397518cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:28:35 compute-0 systemd[1]: libpod-conmon-c7f8a02477c59d402567ea61b48908da8056f83a52bc3d8edbf6866397518cb7.scope: Deactivated successfully.
Jan 29 09:28:35 compute-0 podman[235440]: 2026-01-29 09:28:35.567939254 +0000 UTC m=+0.039790865 container create ebe7f8b721282b2278efd33c38387a888421850057a1bbf7401cfa1dd817b60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_snyder, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:28:35 compute-0 systemd[1]: Started libpod-conmon-ebe7f8b721282b2278efd33c38387a888421850057a1bbf7401cfa1dd817b60e.scope.
Jan 29 09:28:35 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d3af8d0165b87113e95520c5c89db445e91a98d6926683d0e8e1d82f2a817c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d3af8d0165b87113e95520c5c89db445e91a98d6926683d0e8e1d82f2a817c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d3af8d0165b87113e95520c5c89db445e91a98d6926683d0e8e1d82f2a817c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d3af8d0165b87113e95520c5c89db445e91a98d6926683d0e8e1d82f2a817c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d3af8d0165b87113e95520c5c89db445e91a98d6926683d0e8e1d82f2a817c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:35 compute-0 podman[235440]: 2026-01-29 09:28:35.547645996 +0000 UTC m=+0.019497607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:28:35 compute-0 podman[235440]: 2026-01-29 09:28:35.646274307 +0000 UTC m=+0.118125948 container init ebe7f8b721282b2278efd33c38387a888421850057a1bbf7401cfa1dd817b60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_snyder, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 09:28:35 compute-0 python3.9[235434]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 09:28:35 compute-0 podman[235440]: 2026-01-29 09:28:35.650992604 +0000 UTC m=+0.122844225 container start ebe7f8b721282b2278efd33c38387a888421850057a1bbf7401cfa1dd817b60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_snyder, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:28:35 compute-0 podman[235440]: 2026-01-29 09:28:35.65787292 +0000 UTC m=+0.129724791 container attach ebe7f8b721282b2278efd33c38387a888421850057a1bbf7401cfa1dd817b60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 09:28:35 compute-0 ceph-mon[75183]: pgmap v536: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:36 compute-0 heuristic_snyder[235457]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:28:36 compute-0 heuristic_snyder[235457]: --> All data devices are unavailable
Jan 29 09:28:36 compute-0 systemd[1]: libpod-ebe7f8b721282b2278efd33c38387a888421850057a1bbf7401cfa1dd817b60e.scope: Deactivated successfully.
Jan 29 09:28:36 compute-0 podman[235440]: 2026-01-29 09:28:36.143676445 +0000 UTC m=+0.615528056 container died ebe7f8b721282b2278efd33c38387a888421850057a1bbf7401cfa1dd817b60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 09:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1d3af8d0165b87113e95520c5c89db445e91a98d6926683d0e8e1d82f2a817c-merged.mount: Deactivated successfully.
Jan 29 09:28:36 compute-0 podman[235440]: 2026-01-29 09:28:36.224471425 +0000 UTC m=+0.696323036 container remove ebe7f8b721282b2278efd33c38387a888421850057a1bbf7401cfa1dd817b60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 09:28:36 compute-0 systemd[1]: libpod-conmon-ebe7f8b721282b2278efd33c38387a888421850057a1bbf7401cfa1dd817b60e.scope: Deactivated successfully.
Jan 29 09:28:36 compute-0 nova_compute[234793]: 2026-01-29 09:28:36.242 234797 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 29 09:28:36 compute-0 nova_compute[234793]: 2026-01-29 09:28:36.243 234797 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 29 09:28:36 compute-0 nova_compute[234793]: 2026-01-29 09:28:36.243 234797 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 29 09:28:36 compute-0 nova_compute[234793]: 2026-01-29 09:28:36.243 234797 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 29 09:28:36 compute-0 sudo[235639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejgheyjhjiuumecnavjniwmovsdnorxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678915.8385973-1290-96620945859934/AnsiballZ_podman_container.py'
Jan 29 09:28:36 compute-0 sudo[235639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:36 compute-0 sudo[235212]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:36 compute-0 sudo[235642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:28:36 compute-0 sudo[235642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:28:36 compute-0 sudo[235642]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:36 compute-0 sudo[235667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:28:36 compute-0 sudo[235667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:28:36 compute-0 nova_compute[234793]: 2026-01-29 09:28:36.476 234797 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:28:36 compute-0 nova_compute[234793]: 2026-01-29 09:28:36.496 234797 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:28:36 compute-0 nova_compute[234793]: 2026-01-29 09:28:36.496 234797 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 29 09:28:36 compute-0 python3.9[235641]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 29 09:28:36 compute-0 sudo[235639]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:36 compute-0 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 09:28:36 compute-0 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 09:28:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v537: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:36 compute-0 podman[235726]: 2026-01-29 09:28:36.673199021 +0000 UTC m=+0.043026562 container create 984281af8c883ec992732cd6c3a9e755b3e64ffe661b3bf6e69a2cb3a147c7f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jennings, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 09:28:36 compute-0 systemd[1]: Started libpod-conmon-984281af8c883ec992732cd6c3a9e755b3e64ffe661b3bf6e69a2cb3a147c7f8.scope.
Jan 29 09:28:36 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:28:36 compute-0 podman[235726]: 2026-01-29 09:28:36.653207541 +0000 UTC m=+0.023035112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:28:36 compute-0 podman[235726]: 2026-01-29 09:28:36.750009563 +0000 UTC m=+0.119837114 container init 984281af8c883ec992732cd6c3a9e755b3e64ffe661b3bf6e69a2cb3a147c7f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:28:36 compute-0 podman[235726]: 2026-01-29 09:28:36.755764658 +0000 UTC m=+0.125592199 container start 984281af8c883ec992732cd6c3a9e755b3e64ffe661b3bf6e69a2cb3a147c7f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jennings, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:28:36 compute-0 podman[235726]: 2026-01-29 09:28:36.761259456 +0000 UTC m=+0.131086987 container attach 984281af8c883ec992732cd6c3a9e755b3e64ffe661b3bf6e69a2cb3a147c7f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jennings, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 09:28:36 compute-0 busy_jennings[235767]: 167 167
Jan 29 09:28:36 compute-0 systemd[1]: libpod-984281af8c883ec992732cd6c3a9e755b3e64ffe661b3bf6e69a2cb3a147c7f8.scope: Deactivated successfully.
Jan 29 09:28:36 compute-0 podman[235726]: 2026-01-29 09:28:36.763022734 +0000 UTC m=+0.132850275 container died 984281af8c883ec992732cd6c3a9e755b3e64ffe661b3bf6e69a2cb3a147c7f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jennings, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Jan 29 09:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-19a03d19da4820164a42b630e2946cd9a98e898c1588e36e41ed0f270cdc881b-merged.mount: Deactivated successfully.
Jan 29 09:28:36 compute-0 podman[235726]: 2026-01-29 09:28:36.802952051 +0000 UTC m=+0.172779592 container remove 984281af8c883ec992732cd6c3a9e755b3e64ffe661b3bf6e69a2cb3a147c7f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:28:36 compute-0 systemd[1]: libpod-conmon-984281af8c883ec992732cd6c3a9e755b3e64ffe661b3bf6e69a2cb3a147c7f8.scope: Deactivated successfully.
Jan 29 09:28:36 compute-0 podman[235844]: 2026-01-29 09:28:36.927903532 +0000 UTC m=+0.044410829 container create 1a7f700431f53c259db582f5bdc8e48347177a277237d8fd245a488d850d4374 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:28:36 compute-0 systemd[1]: Started libpod-conmon-1a7f700431f53c259db582f5bdc8e48347177a277237d8fd245a488d850d4374.scope.
Jan 29 09:28:36 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6902ae237a1da30abba1c48a527f937fc8a01bed039a00b219169401d049426e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6902ae237a1da30abba1c48a527f937fc8a01bed039a00b219169401d049426e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6902ae237a1da30abba1c48a527f937fc8a01bed039a00b219169401d049426e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6902ae237a1da30abba1c48a527f937fc8a01bed039a00b219169401d049426e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:37 compute-0 podman[235844]: 2026-01-29 09:28:36.907909532 +0000 UTC m=+0.024416869 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:28:37 compute-0 podman[235844]: 2026-01-29 09:28:37.019454432 +0000 UTC m=+0.135961749 container init 1a7f700431f53c259db582f5bdc8e48347177a277237d8fd245a488d850d4374 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:28:37 compute-0 podman[235844]: 2026-01-29 09:28:37.026199124 +0000 UTC m=+0.142706421 container start 1a7f700431f53c259db582f5bdc8e48347177a277237d8fd245a488d850d4374 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 29 09:28:37 compute-0 podman[235844]: 2026-01-29 09:28:37.029865513 +0000 UTC m=+0.146372830 container attach 1a7f700431f53c259db582f5bdc8e48347177a277237d8fd245a488d850d4374 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lovelace, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:28:37 compute-0 sudo[235939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thxrkawajkcrqunmbchbfcgypymaevxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678916.8043516-1298-182776328906216/AnsiballZ_systemd.py'
Jan 29 09:28:37 compute-0 sudo[235939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.166 234797 INFO nova.virt.driver [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]: {
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:     "0": [
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:         {
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "devices": [
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "/dev/loop3"
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             ],
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_name": "ceph_lv0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_size": "21470642176",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "name": "ceph_lv0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "tags": {
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.cluster_name": "ceph",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.crush_device_class": "",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.encrypted": "0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.objectstore": "bluestore",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.osd_id": "0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.type": "block",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.vdo": "0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.with_tpm": "0"
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             },
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "type": "block",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "vg_name": "ceph_vg0"
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:         }
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:     ],
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:     "1": [
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:         {
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "devices": [
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "/dev/loop4"
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             ],
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_name": "ceph_lv1",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_size": "21470642176",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "name": "ceph_lv1",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "tags": {
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.cluster_name": "ceph",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.crush_device_class": "",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.encrypted": "0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.objectstore": "bluestore",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.osd_id": "1",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.type": "block",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.vdo": "0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.with_tpm": "0"
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             },
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "type": "block",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "vg_name": "ceph_vg1"
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:         }
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:     ],
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:     "2": [
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:         {
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "devices": [
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "/dev/loop5"
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             ],
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_name": "ceph_lv2",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_size": "21470642176",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "name": "ceph_lv2",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "tags": {
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.cluster_name": "ceph",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.crush_device_class": "",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.encrypted": "0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.objectstore": "bluestore",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.osd_id": "2",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.type": "block",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.vdo": "0",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:                 "ceph.with_tpm": "0"
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             },
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "type": "block",
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:             "vg_name": "ceph_vg2"
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:         }
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]:     ]
Jan 29 09:28:37 compute-0 mystifying_lovelace[235888]: }
Jan 29 09:28:37 compute-0 systemd[1]: libpod-1a7f700431f53c259db582f5bdc8e48347177a277237d8fd245a488d850d4374.scope: Deactivated successfully.
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.331 234797 INFO nova.compute.provider_config [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 29 09:28:37 compute-0 python3.9[235941]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 09:28:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.347 234797 DEBUG oslo_concurrency.lockutils [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.349 234797 DEBUG oslo_concurrency.lockutils [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.349 234797 DEBUG oslo_concurrency.lockutils [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.349 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.350 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.350 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.350 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.350 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.350 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.350 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.351 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.351 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.351 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.351 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.351 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.351 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.351 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.352 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.352 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.352 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.352 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.352 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.352 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.352 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.353 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.353 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.353 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.353 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.353 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.353 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.354 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.354 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.354 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.354 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.354 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.354 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.355 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.355 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.355 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.355 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.355 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.356 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.356 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.356 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.356 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.356 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.356 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.357 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.357 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.357 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.357 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.357 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.357 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.358 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.358 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.358 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.358 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.358 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.358 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.358 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.359 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.359 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.359 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.359 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.359 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.359 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.359 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.360 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.360 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.360 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.360 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.360 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.360 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.361 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.361 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.361 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.361 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.361 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.361 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.362 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.362 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.362 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.362 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.362 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.362 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.363 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.363 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.363 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.363 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.363 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.363 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.364 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.364 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.364 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.364 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.364 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.364 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.364 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.365 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.365 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.365 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.365 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.365 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.366 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.366 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.366 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.366 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.366 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.367 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.367 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.367 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.367 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.367 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.368 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.368 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.368 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.368 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.368 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.368 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.369 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.369 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.369 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.369 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.369 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.369 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.370 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.370 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.370 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.370 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.370 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 podman[235946]: 2026-01-29 09:28:37.370598794 +0000 UTC m=+0.028145021 container died 1a7f700431f53c259db582f5bdc8e48347177a277237d8fd245a488d850d4374 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lovelace, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.371 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.371 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.371 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.371 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.371 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.371 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.372 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.372 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.372 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.372 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.372 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.373 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.373 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.373 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.373 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.373 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.373 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.374 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.374 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.374 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.374 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.374 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.375 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.375 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.375 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.375 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.375 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.375 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.376 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.376 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.376 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.376 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.376 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.377 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.377 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.377 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.377 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.377 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.378 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.378 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.378 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.378 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.378 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.378 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.379 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.379 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.379 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.379 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.379 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.379 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.380 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.380 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.380 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.380 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.380 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.381 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.381 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.381 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.381 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.381 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.381 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.382 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.382 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.382 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.382 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.382 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.383 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.383 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.383 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.383 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.383 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.383 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.384 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.384 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.384 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.384 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.384 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.384 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.385 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.385 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.385 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.385 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.385 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.385 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.386 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.386 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.386 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.386 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.386 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.387 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.387 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.387 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.387 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.387 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.387 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.388 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.388 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.388 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 systemd[1]: Stopping nova_compute container...
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.388 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.388 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.388 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.389 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.389 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.389 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.389 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.389 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.389 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.390 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.390 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.390 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.390 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.390 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.391 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.391 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.391 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.391 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.391 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.391 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.392 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.392 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.392 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.392 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.392 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.392 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.393 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.393 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.393 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.393 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.393 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.393 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.394 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.394 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.394 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.394 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.394 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.395 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.395 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.395 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.395 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.395 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.395 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.396 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.396 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.396 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.396 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.396 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.397 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.397 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.397 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.397 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.397 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.398 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.398 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.398 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.398 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.398 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.398 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.399 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.399 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6902ae237a1da30abba1c48a527f937fc8a01bed039a00b219169401d049426e-merged.mount: Deactivated successfully.
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.399 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.399 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.399 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.400 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.400 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.400 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.400 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.400 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.400 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.401 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.401 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.401 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.401 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.401 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.401 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.402 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.402 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.402 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.402 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.402 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.402 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.403 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.403 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.403 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.403 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.403 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.403 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.404 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.404 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.404 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.404 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.404 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.405 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.405 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.405 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.405 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.405 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.405 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.406 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.406 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.406 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.406 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.406 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.406 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.407 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.407 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.407 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.407 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.407 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.407 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.408 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.408 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.408 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.408 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.408 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.408 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.409 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.409 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.409 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.409 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.409 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.409 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.409 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.410 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.410 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.410 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.410 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.410 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.410 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.411 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.411 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.411 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.411 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.411 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.412 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.412 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.412 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.412 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.412 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.412 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.413 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.413 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.413 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.413 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.413 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.413 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 podman[235946]: 2026-01-29 09:28:37.413780679 +0000 UTC m=+0.071326886 container remove 1a7f700431f53c259db582f5bdc8e48347177a277237d8fd245a488d850d4374 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lovelace, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.413 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.413 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.414 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.414 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.414 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.414 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.414 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.414 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.414 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.415 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.415 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.415 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.415 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.415 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.415 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.416 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.416 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.416 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.416 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.416 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.416 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.417 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.417 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.417 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.417 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.417 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.417 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.417 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.418 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.418 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.418 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.418 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.418 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.418 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.418 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.419 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.419 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.419 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 systemd[1]: libpod-conmon-1a7f700431f53c259db582f5bdc8e48347177a277237d8fd245a488d850d4374.scope: Deactivated successfully.
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.419 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.419 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.420 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.420 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.420 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.420 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.420 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.421 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.421 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.421 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.421 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.421 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.421 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.422 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.422 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.422 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.422 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.422 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.422 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.423 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.423 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.423 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.423 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.423 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.423 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.424 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.424 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.424 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.424 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.424 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.424 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.425 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.425 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.425 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.425 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.425 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.426 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.426 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.426 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.426 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.426 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.426 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.427 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.427 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.427 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.427 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.427 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.427 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.428 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.428 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.428 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.428 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.428 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.428 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.428 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.429 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.429 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.429 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.429 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.429 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.429 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.430 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.430 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.430 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.430 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.430 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.430 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.430 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.431 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.431 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.431 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.431 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.431 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.432 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.432 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.432 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.432 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.432 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.432 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.432 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.433 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.433 234797 WARNING oslo_config.cfg [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 29 09:28:37 compute-0 nova_compute[234793]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 29 09:28:37 compute-0 nova_compute[234793]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 29 09:28:37 compute-0 nova_compute[234793]: and ``live_migration_inbound_addr`` respectively.
Jan 29 09:28:37 compute-0 nova_compute[234793]: ).  Its value may be silently ignored in the future.
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.433 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.433 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.433 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.434 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.434 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.434 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.434 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.434 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.434 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.435 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.435 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.435 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.435 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.435 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.435 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.436 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.436 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.436 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.436 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.rbd_secret_uuid        = 3fdce3ca-565d-5459-88e8-1ffe58b48437 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.436 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.436 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.437 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.437 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.437 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.437 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.437 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.437 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.437 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.438 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.438 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.438 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.438 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.438 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.438 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.438 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.439 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.439 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.439 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.439 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.439 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.439 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.440 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.440 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.440 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.440 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.441 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.441 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.441 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.442 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.442 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.442 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.442 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.442 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.442 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.442 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.443 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.443 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.443 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.443 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.443 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.443 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.443 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.444 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.444 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.444 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.444 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.444 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.444 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.444 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.445 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.445 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.445 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.445 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.445 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.445 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.445 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.446 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.446 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.446 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.446 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.446 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.446 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.447 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.447 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.447 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.447 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.447 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.447 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.448 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.448 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.448 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.448 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.448 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.448 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.448 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.449 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.449 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.449 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.449 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.449 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.449 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.449 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.449 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.450 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.450 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.450 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.450 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.450 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.450 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.451 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.451 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.451 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.451 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.451 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.451 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.451 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.452 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.452 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.452 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.452 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.452 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.452 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.452 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.453 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.453 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.453 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.453 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.453 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.453 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.453 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.454 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.454 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.454 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.454 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.454 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.454 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.454 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.455 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.455 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.455 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.455 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.455 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.455 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.456 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.456 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.456 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.456 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.456 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.456 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.457 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.457 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.457 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.457 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.457 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.457 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.457 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.458 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.458 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.458 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.458 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.458 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.458 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.458 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.459 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.459 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.459 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.459 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.459 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.459 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.459 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.460 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.460 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.460 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.460 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.460 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.460 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.460 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.460 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.461 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.461 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.461 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.461 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.461 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.461 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.462 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.462 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.462 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.462 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.462 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.462 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.462 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.463 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.463 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.463 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.463 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.463 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.463 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.464 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.464 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.464 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.464 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 sudo[235667]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.464 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.464 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.465 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.465 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.465 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.465 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.465 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.465 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.466 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.466 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.466 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.466 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.466 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.466 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.466 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.467 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.467 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.467 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.467 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.467 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.467 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.468 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.468 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.468 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.468 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.468 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.468 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.468 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.469 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.469 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.469 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.469 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.469 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.469 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.469 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.470 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.470 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.470 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.470 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.470 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.470 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.470 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.471 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.471 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.471 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.471 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.471 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.472 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.472 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.472 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.472 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.472 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.473 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.473 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.473 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.473 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.473 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.473 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.473 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.473 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.474 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.474 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.474 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.474 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.474 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.474 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.475 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.475 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.475 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.475 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.475 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.475 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.475 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.476 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.476 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.476 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.476 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.476 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.476 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.476 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.477 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.477 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.477 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.477 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.477 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.477 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.478 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.478 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.478 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.478 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.478 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.478 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.479 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.479 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.479 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.479 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.479 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.480 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.480 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.480 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.480 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.480 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.481 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.481 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.481 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.481 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.481 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.481 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.482 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.482 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.482 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.482 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.482 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.483 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.483 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.483 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.483 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.483 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.483 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.484 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.484 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.484 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.484 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.484 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.485 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.485 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.485 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.485 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.485 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.485 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.486 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.486 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.486 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.486 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.486 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.487 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.487 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.487 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.487 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.487 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.488 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.488 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.488 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.488 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.488 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.489 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.489 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.489 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.489 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.489 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.489 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.490 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.490 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.490 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.490 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.490 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.491 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.491 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.491 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.491 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.491 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.491 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.492 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.492 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.492 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.492 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.492 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.493 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.493 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.493 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.493 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.493 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.493 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.494 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.494 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.494 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.494 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.494 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.495 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.495 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.495 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.495 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.495 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.496 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.496 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.496 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.496 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.496 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.496 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.497 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.497 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.497 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.497 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.497 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.498 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.498 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.498 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.498 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.498 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.498 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.499 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.499 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.499 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.499 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.499 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.500 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.500 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.500 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.500 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.500 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.501 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.501 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.501 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.501 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.501 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.501 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.502 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.502 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.502 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.502 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.502 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.502 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.503 234797 DEBUG oslo_service.service [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.504 234797 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.507 234797 DEBUG oslo_concurrency.lockutils [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.508 234797 DEBUG oslo_concurrency.lockutils [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 09:28:37 compute-0 nova_compute[234793]: 2026-01-29 09:28:37.508 234797 DEBUG oslo_concurrency.lockutils [None req-a9d0590e-3e2e-487e-8c93-fc736db69447 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 29 09:28:37 compute-0 sudo[235980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:28:37 compute-0 sudo[235980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:28:37 compute-0 sudo[235980]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:37 compute-0 sudo[236005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:28:37 compute-0 sudo[236005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:28:37 compute-0 podman[236042]: 2026-01-29 09:28:37.849335169 +0000 UTC m=+0.038832149 container create 958457a3ab11d44ccc9e402252dab3564df4363b185269f0dbf64259daa58c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 09:28:37 compute-0 systemd[1]: Started libpod-conmon-958457a3ab11d44ccc9e402252dab3564df4363b185269f0dbf64259daa58c9d.scope.
Jan 29 09:28:37 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:28:37 compute-0 podman[236042]: 2026-01-29 09:28:37.831005354 +0000 UTC m=+0.020502344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:28:37 compute-0 podman[236042]: 2026-01-29 09:28:37.930343464 +0000 UTC m=+0.119840424 container init 958457a3ab11d44ccc9e402252dab3564df4363b185269f0dbf64259daa58c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 09:28:37 compute-0 podman[236042]: 2026-01-29 09:28:37.938467623 +0000 UTC m=+0.127964583 container start 958457a3ab11d44ccc9e402252dab3564df4363b185269f0dbf64259daa58c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_engelbart, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 09:28:37 compute-0 podman[236042]: 2026-01-29 09:28:37.944225248 +0000 UTC m=+0.133722228 container attach 958457a3ab11d44ccc9e402252dab3564df4363b185269f0dbf64259daa58c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_engelbart, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 09:28:37 compute-0 systemd[1]: libpod-958457a3ab11d44ccc9e402252dab3564df4363b185269f0dbf64259daa58c9d.scope: Deactivated successfully.
Jan 29 09:28:37 compute-0 vigorous_engelbart[236059]: 167 167
Jan 29 09:28:37 compute-0 podman[236042]: 2026-01-29 09:28:37.945235646 +0000 UTC m=+0.134732626 container died 958457a3ab11d44ccc9e402252dab3564df4363b185269f0dbf64259daa58c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 29 09:28:38 compute-0 ceph-mon[75183]: pgmap v537: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f51bc519d7d917ac627107e54d6a0154de04b025f086fc4352172d408965dcf-merged.mount: Deactivated successfully.
Jan 29 09:28:38 compute-0 podman[236042]: 2026-01-29 09:28:38.030227629 +0000 UTC m=+0.219724589 container remove 958457a3ab11d44ccc9e402252dab3564df4363b185269f0dbf64259daa58c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_engelbart, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 09:28:38 compute-0 systemd[1]: libpod-conmon-958457a3ab11d44ccc9e402252dab3564df4363b185269f0dbf64259daa58c9d.scope: Deactivated successfully.
Jan 29 09:28:38 compute-0 systemd[1]: libpod-3dc9f683cc6662d9d6b54fd29a40371e3f294a111333f929368fa860ad08d178.scope: Deactivated successfully.
Jan 29 09:28:38 compute-0 systemd[1]: libpod-3dc9f683cc6662d9d6b54fd29a40371e3f294a111333f929368fa860ad08d178.scope: Consumed 3.417s CPU time.
Jan 29 09:28:38 compute-0 conmon[234793]: conmon 3dc9f683cc6662d9d6b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3dc9f683cc6662d9d6b54fd29a40371e3f294a111333f929368fa860ad08d178.scope/container/memory.events
Jan 29 09:28:38 compute-0 podman[235963]: 2026-01-29 09:28:38.140840523 +0000 UTC m=+0.741426983 container died 3dc9f683cc6662d9d6b54fd29a40371e3f294a111333f929368fa860ad08d178 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 29 09:28:38 compute-0 podman[236083]: 2026-01-29 09:28:38.165881258 +0000 UTC m=+0.053018081 container create 098b86039819c16c40e5c3bef8ea9fea2f0d3ba64797ade9711fe2c41e4442a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 09:28:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3dc9f683cc6662d9d6b54fd29a40371e3f294a111333f929368fa860ad08d178-userdata-shm.mount: Deactivated successfully.
Jan 29 09:28:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dacb864e577e054f5617c028a0ca2f2cfe9c8ca5b218c00e5954cf2c842a381-merged.mount: Deactivated successfully.
Jan 29 09:28:38 compute-0 systemd[1]: Started libpod-conmon-098b86039819c16c40e5c3bef8ea9fea2f0d3ba64797ade9711fe2c41e4442a5.scope.
Jan 29 09:28:38 compute-0 podman[236083]: 2026-01-29 09:28:38.136419543 +0000 UTC m=+0.023556386 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:28:38 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d04e36fc5626420e0a4dd6331d784589afd3858bd6cfb1eef4faf6151afa90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d04e36fc5626420e0a4dd6331d784589afd3858bd6cfb1eef4faf6151afa90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d04e36fc5626420e0a4dd6331d784589afd3858bd6cfb1eef4faf6151afa90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d04e36fc5626420e0a4dd6331d784589afd3858bd6cfb1eef4faf6151afa90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:38 compute-0 podman[236083]: 2026-01-29 09:28:38.353098759 +0000 UTC m=+0.240235612 container init 098b86039819c16c40e5c3bef8ea9fea2f0d3ba64797ade9711fe2c41e4442a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 29 09:28:38 compute-0 podman[236083]: 2026-01-29 09:28:38.359627555 +0000 UTC m=+0.246764368 container start 098b86039819c16c40e5c3bef8ea9fea2f0d3ba64797ade9711fe2c41e4442a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lederberg, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:28:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v538: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:38 compute-0 podman[236083]: 2026-01-29 09:28:38.806310495 +0000 UTC m=+0.693447318 container attach 098b86039819c16c40e5c3bef8ea9fea2f0d3ba64797ade9711fe2c41e4442a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lederberg, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:28:38 compute-0 lvm[236198]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:28:38 compute-0 lvm[236199]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:28:38 compute-0 lvm[236198]: VG ceph_vg0 finished
Jan 29 09:28:38 compute-0 lvm[236199]: VG ceph_vg1 finished
Jan 29 09:28:39 compute-0 lvm[236201]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:28:39 compute-0 lvm[236201]: VG ceph_vg2 finished
Jan 29 09:28:39 compute-0 podman[235963]: 2026-01-29 09:28:39.002027415 +0000 UTC m=+1.602613865 container cleanup 3dc9f683cc6662d9d6b54fd29a40371e3f294a111333f929368fa860ad08d178 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 29 09:28:39 compute-0 podman[235963]: nova_compute
Jan 29 09:28:39 compute-0 festive_lederberg[236114]: {}
Jan 29 09:28:39 compute-0 podman[236206]: nova_compute
Jan 29 09:28:39 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 29 09:28:39 compute-0 systemd[1]: Stopped nova_compute container.
Jan 29 09:28:39 compute-0 systemd[1]: Starting nova_compute container...
Jan 29 09:28:39 compute-0 podman[236189]: 2026-01-29 09:28:39.088086007 +0000 UTC m=+0.116209906 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 29 09:28:39 compute-0 systemd[1]: libpod-098b86039819c16c40e5c3bef8ea9fea2f0d3ba64797ade9711fe2c41e4442a5.scope: Deactivated successfully.
Jan 29 09:28:39 compute-0 systemd[1]: libpod-098b86039819c16c40e5c3bef8ea9fea2f0d3ba64797ade9711fe2c41e4442a5.scope: Consumed 1.043s CPU time.
Jan 29 09:28:39 compute-0 podman[236083]: 2026-01-29 09:28:39.102595718 +0000 UTC m=+0.989732551 container died 098b86039819c16c40e5c3bef8ea9fea2f0d3ba64797ade9711fe2c41e4442a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lederberg, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6d04e36fc5626420e0a4dd6331d784589afd3858bd6cfb1eef4faf6151afa90-merged.mount: Deactivated successfully.
Jan 29 09:28:39 compute-0 podman[236083]: 2026-01-29 09:28:39.159087312 +0000 UTC m=+1.046224135 container remove 098b86039819c16c40e5c3bef8ea9fea2f0d3ba64797ade9711fe2c41e4442a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lederberg, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:28:39 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:28:39 compute-0 systemd[1]: libpod-conmon-098b86039819c16c40e5c3bef8ea9fea2f0d3ba64797ade9711fe2c41e4442a5.scope: Deactivated successfully.
Jan 29 09:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dacb864e577e054f5617c028a0ca2f2cfe9c8ca5b218c00e5954cf2c842a381/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dacb864e577e054f5617c028a0ca2f2cfe9c8ca5b218c00e5954cf2c842a381/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dacb864e577e054f5617c028a0ca2f2cfe9c8ca5b218c00e5954cf2c842a381/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dacb864e577e054f5617c028a0ca2f2cfe9c8ca5b218c00e5954cf2c842a381/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dacb864e577e054f5617c028a0ca2f2cfe9c8ca5b218c00e5954cf2c842a381/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:39 compute-0 podman[236228]: 2026-01-29 09:28:39.185343091 +0000 UTC m=+0.097448260 container init 3dc9f683cc6662d9d6b54fd29a40371e3f294a111333f929368fa860ad08d178 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 09:28:39 compute-0 podman[236228]: 2026-01-29 09:28:39.1912621 +0000 UTC m=+0.103367239 container start 3dc9f683cc6662d9d6b54fd29a40371e3f294a111333f929368fa860ad08d178 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 09:28:39 compute-0 podman[236228]: nova_compute
Jan 29 09:28:39 compute-0 nova_compute[236255]: + sudo -E kolla_set_configs
Jan 29 09:28:39 compute-0 systemd[1]: Started nova_compute container.
Jan 29 09:28:39 compute-0 sudo[236005]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:28:39 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:28:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:28:39 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:28:39 compute-0 sudo[235939]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Validating config file
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Copying service configuration files
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Deleting /etc/ceph
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Creating directory /etc/ceph
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /etc/ceph
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Writing out command to execute
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 29 09:28:39 compute-0 nova_compute[236255]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 29 09:28:39 compute-0 nova_compute[236255]: ++ cat /run_command
Jan 29 09:28:39 compute-0 sudo[236266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:28:39 compute-0 nova_compute[236255]: + CMD=nova-compute
Jan 29 09:28:39 compute-0 nova_compute[236255]: + ARGS=
Jan 29 09:28:39 compute-0 nova_compute[236255]: + sudo kolla_copy_cacerts
Jan 29 09:28:39 compute-0 sudo[236266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:28:39 compute-0 sudo[236266]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:39 compute-0 nova_compute[236255]: + [[ ! -n '' ]]
Jan 29 09:28:39 compute-0 nova_compute[236255]: + . kolla_extend_start
Jan 29 09:28:39 compute-0 nova_compute[236255]: Running command: 'nova-compute'
Jan 29 09:28:39 compute-0 nova_compute[236255]: + echo 'Running command: '\''nova-compute'\'''
Jan 29 09:28:39 compute-0 nova_compute[236255]: + umask 0022
Jan 29 09:28:39 compute-0 nova_compute[236255]: + exec nova-compute
Jan 29 09:28:39 compute-0 sudo[236444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvoexnegxqaularslllruqzwabbtwvfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769678919.3989818-1307-89144150295563/AnsiballZ_podman_container.py'
Jan 29 09:28:39 compute-0 sudo[236444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:28:39 compute-0 python3.9[236446]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 29 09:28:40 compute-0 systemd[1]: Started libpod-conmon-5f1c91b3e7d0071b4a477a19ad9af9e77d526a726d85ac36e0e2fa4087782bdf.scope.
Jan 29 09:28:40 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:28:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d67f3daedf1fab695693e0693564526e87718a72eed32b9d6952c5d4927ecd33/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d67f3daedf1fab695693e0693564526e87718a72eed32b9d6952c5d4927ecd33/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d67f3daedf1fab695693e0693564526e87718a72eed32b9d6952c5d4927ecd33/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 29 09:28:40 compute-0 podman[236471]: 2026-01-29 09:28:40.223713143 +0000 UTC m=+0.200832359 container init 5f1c91b3e7d0071b4a477a19ad9af9e77d526a726d85ac36e0e2fa4087782bdf (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:28:40 compute-0 podman[236471]: 2026-01-29 09:28:40.229676624 +0000 UTC m=+0.206795840 container start 5f1c91b3e7d0071b4a477a19ad9af9e77d526a726d85ac36e0e2fa4087782bdf (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 29 09:28:40 compute-0 ceph-mon[75183]: pgmap v538: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:28:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Applying nova statedir ownership
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 29 09:28:40 compute-0 nova_compute_init[236493]: INFO:nova_statedir:Nova statedir ownership complete
Jan 29 09:28:40 compute-0 systemd[1]: libpod-5f1c91b3e7d0071b4a477a19ad9af9e77d526a726d85ac36e0e2fa4087782bdf.scope: Deactivated successfully.
Jan 29 09:28:40 compute-0 python3.9[236446]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 29 09:28:40 compute-0 podman[236494]: 2026-01-29 09:28:40.325828028 +0000 UTC m=+0.031079340 container died 5f1c91b3e7d0071b4a477a19ad9af9e77d526a726d85ac36e0e2fa4087782bdf (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init, io.buildah.version=1.41.3)
Jan 29 09:28:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f1c91b3e7d0071b4a477a19ad9af9e77d526a726d85ac36e0e2fa4087782bdf-userdata-shm.mount: Deactivated successfully.
Jan 29 09:28:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d67f3daedf1fab695693e0693564526e87718a72eed32b9d6952c5d4927ecd33-merged.mount: Deactivated successfully.
Jan 29 09:28:40 compute-0 podman[236494]: 2026-01-29 09:28:40.477294154 +0000 UTC m=+0.182545396 container cleanup 5f1c91b3e7d0071b4a477a19ad9af9e77d526a726d85ac36e0e2fa4087782bdf (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 29 09:28:40 compute-0 systemd[1]: libpod-conmon-5f1c91b3e7d0071b4a477a19ad9af9e77d526a726d85ac36e0e2fa4087782bdf.scope: Deactivated successfully.
Jan 29 09:28:40 compute-0 sudo[236444]: pam_unix(sudo:session): session closed for user root
Jan 29 09:28:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v539: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:41 compute-0 sshd-session[211375]: Connection closed by 192.168.122.30 port 48244
Jan 29 09:28:41 compute-0 sshd-session[211372]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:28:41 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 29 09:28:41 compute-0 systemd[1]: session-49.scope: Consumed 1min 50.056s CPU time.
Jan 29 09:28:41 compute-0 systemd-logind[799]: Session 49 logged out. Waiting for processes to exit.
Jan 29 09:28:41 compute-0 systemd-logind[799]: Removed session 49.
Jan 29 09:28:41 compute-0 nova_compute[236255]: 2026-01-29 09:28:41.373 236262 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 29 09:28:41 compute-0 nova_compute[236255]: 2026-01-29 09:28:41.373 236262 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 29 09:28:41 compute-0 nova_compute[236255]: 2026-01-29 09:28:41.373 236262 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 29 09:28:41 compute-0 nova_compute[236255]: 2026-01-29 09:28:41.373 236262 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 29 09:28:41 compute-0 nova_compute[236255]: 2026-01-29 09:28:41.535 236262 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:28:41 compute-0 nova_compute[236255]: 2026-01-29 09:28:41.545 236262 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:28:41 compute-0 nova_compute[236255]: 2026-01-29 09:28:41.546 236262 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 29 09:28:41 compute-0 nova_compute[236255]: 2026-01-29 09:28:41.985 236262 INFO nova.virt.driver [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.083 236262 INFO nova.compute.provider_config [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.105 236262 DEBUG oslo_concurrency.lockutils [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.106 236262 DEBUG oslo_concurrency.lockutils [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.106 236262 DEBUG oslo_concurrency.lockutils [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.106 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.107 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.107 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.107 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.107 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.107 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.107 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.108 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.108 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.108 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.108 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.108 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.108 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.108 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.109 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.109 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.109 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.109 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.109 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.109 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.109 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.110 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.110 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.110 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.110 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.110 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.110 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.110 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.111 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.111 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.111 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.111 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.111 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.111 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.111 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.112 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.112 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.112 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.112 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.112 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.112 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.113 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.113 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.113 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.113 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.113 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.114 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.114 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.114 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.114 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.114 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.114 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.114 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.115 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.115 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.115 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.115 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.115 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.115 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.115 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.116 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.116 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.116 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.116 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.116 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.116 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.116 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.117 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.117 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.117 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.117 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.117 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.117 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.117 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.118 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.118 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.118 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.118 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.118 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.118 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.118 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.119 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.119 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.119 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.119 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.119 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.119 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.119 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.120 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.120 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.120 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.120 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.120 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.120 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.120 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.120 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.121 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.121 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.121 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.121 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.121 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.121 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.121 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.122 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.122 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.122 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.122 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.122 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.122 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.122 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.123 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.123 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.123 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.123 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.123 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.123 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.123 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.124 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.124 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.124 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.124 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.124 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.124 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.124 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.125 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.125 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.125 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.125 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.125 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.125 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.126 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.126 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.126 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.126 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.126 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.127 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.127 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.127 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.127 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.127 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.127 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.128 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.128 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.128 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.128 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.128 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.128 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.129 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.129 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.129 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.129 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.129 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.129 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.130 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.130 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.130 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.130 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.130 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.130 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.131 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.131 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.131 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.131 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.131 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.131 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.132 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.132 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.132 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.132 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.132 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.133 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.133 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.133 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.133 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.133 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.133 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.134 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.134 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.134 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.134 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.134 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.134 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.135 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.135 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.135 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.135 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.135 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.135 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.135 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.136 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.136 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.136 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.136 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.136 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.136 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.137 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.137 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.137 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.137 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.137 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.137 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.138 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.138 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.138 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.138 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.138 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.138 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.139 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.139 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.139 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.139 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.139 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.139 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.140 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.140 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.140 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.140 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.140 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.140 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.140 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.141 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.141 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.141 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.141 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.141 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.141 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.141 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.141 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.142 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.142 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.142 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.142 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.142 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.142 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.142 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.143 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.143 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.143 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.143 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.143 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.143 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.143 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.144 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.144 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.144 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.144 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.144 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.144 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.144 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.144 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.145 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.145 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.145 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.145 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.145 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.145 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.145 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.146 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.146 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.146 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.146 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.146 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.146 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.146 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.147 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.147 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.147 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.147 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.147 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.147 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.147 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.148 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.148 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.148 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.148 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.148 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.148 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.148 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.148 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.149 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.149 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.149 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.149 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.149 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.149 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.149 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.150 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.150 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.150 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.150 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.151 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.151 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.151 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.151 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.151 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.151 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.151 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.152 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.152 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.152 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.152 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.152 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.152 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.153 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.153 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.153 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.153 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.153 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.153 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.154 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.154 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.154 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.154 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.154 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.154 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.155 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.155 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.155 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.155 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.155 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.155 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.155 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.155 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.156 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.156 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.156 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.156 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.156 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.157 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.157 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.157 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.157 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.157 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.157 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.158 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.158 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.158 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.158 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.158 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.159 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.159 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.159 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.159 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.159 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.159 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.160 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.160 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.160 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.160 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.160 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.161 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.161 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.161 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.161 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.161 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.162 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.162 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.162 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.163 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.163 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.163 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.163 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.163 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.164 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.164 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.164 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.164 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.164 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.164 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.165 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.165 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.165 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.165 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.165 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.165 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.166 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.166 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.166 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.166 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.166 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.166 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.167 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.167 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.167 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.167 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.167 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.168 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.168 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.168 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.168 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.169 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.169 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.169 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.169 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.169 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.170 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.170 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.170 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.170 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.170 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.170 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.171 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.171 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.171 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.171 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.171 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.171 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.171 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.172 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.172 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.172 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.172 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.172 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.172 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.172 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.172 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.173 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.173 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.173 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.173 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.173 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.173 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.174 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.174 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.174 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.174 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.174 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.175 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.175 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.175 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.175 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.175 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.175 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.175 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.176 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.176 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.176 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.176 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.176 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.176 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.177 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.177 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.177 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.177 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.177 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.177 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.178 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.178 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.178 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.178 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.178 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.179 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.179 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.179 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.179 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.179 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.179 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.180 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.180 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.180 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.180 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.180 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.180 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.180 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.181 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.181 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.181 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.181 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.181 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.181 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.182 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.182 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.182 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.182 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.182 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.182 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.182 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.183 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.183 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.183 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.183 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.183 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.183 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.184 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.184 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.184 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.184 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.184 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.184 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.184 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.185 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.185 236262 WARNING oslo_config.cfg [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 29 09:28:42 compute-0 nova_compute[236255]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 29 09:28:42 compute-0 nova_compute[236255]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 29 09:28:42 compute-0 nova_compute[236255]: and ``live_migration_inbound_addr`` respectively.
Jan 29 09:28:42 compute-0 nova_compute[236255]: ).  Its value may be silently ignored in the future.
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.185 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.185 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.186 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.186 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.186 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.186 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.186 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.186 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.186 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.187 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.187 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.187 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.187 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.187 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.187 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.187 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.188 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.188 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.188 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.rbd_secret_uuid        = 3fdce3ca-565d-5459-88e8-1ffe58b48437 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.188 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.188 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.188 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.188 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.189 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.189 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.189 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.189 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.189 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.189 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.190 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.190 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.190 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.190 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.190 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.190 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.191 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.191 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.191 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.191 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.191 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.191 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.191 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.191 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.192 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.192 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.192 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.192 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.192 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.192 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.193 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.193 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.193 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.193 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.193 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.193 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.194 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.194 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.194 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.194 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.194 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.194 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.195 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.195 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.195 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.195 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.195 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.195 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.195 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.196 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.196 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.196 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.196 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.196 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.196 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.196 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.197 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.197 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.197 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.197 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.197 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.197 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.198 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.198 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.198 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.198 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.198 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.199 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.199 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.199 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.199 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.199 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.199 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.200 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.200 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.200 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.200 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.200 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.200 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.201 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.201 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.201 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.201 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.201 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.202 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.202 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.202 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.202 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.202 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.203 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.203 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.203 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.203 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.203 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.203 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.204 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.204 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.204 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.204 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.204 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.205 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.205 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.205 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.205 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.205 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.206 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.206 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.206 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.206 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.207 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.207 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.207 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.207 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.207 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.208 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.208 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.208 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.208 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.208 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.208 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.209 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.209 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.209 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.209 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.210 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.210 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.210 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.210 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.210 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.210 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.211 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.211 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.211 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.211 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.211 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.211 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.211 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.212 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.212 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.212 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.212 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.212 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.212 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.213 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.213 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.213 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.213 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.213 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.213 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.214 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.214 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.214 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.214 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.214 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.214 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.214 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.215 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.215 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.215 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.215 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.215 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.215 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.216 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.216 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.216 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.216 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.216 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.216 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.216 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.217 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.217 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.217 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.217 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.217 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.217 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.217 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.218 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.218 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.218 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.218 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.218 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.219 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.219 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.219 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.219 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.219 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.219 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.220 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.220 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.220 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.220 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.220 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.220 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.221 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.221 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.221 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.221 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.221 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.221 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.222 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.222 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.222 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.222 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.222 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.222 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.222 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.223 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.223 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.223 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.223 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.223 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.223 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.223 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.224 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.224 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.224 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.224 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.224 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.224 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.224 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.225 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.225 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.225 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.225 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.225 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.225 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.225 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.226 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.226 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.226 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.226 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.226 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.226 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.227 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.227 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.227 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.227 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.227 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.227 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.228 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.228 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.228 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.228 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.228 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.228 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.228 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.229 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.229 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.229 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.229 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.229 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.229 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.229 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.230 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.230 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.230 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.230 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.230 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.230 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.230 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.231 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.231 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.231 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.231 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.231 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.231 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.231 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.232 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.232 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.232 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.232 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.232 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.232 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.233 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.233 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.233 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.233 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.233 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.233 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.233 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.234 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.234 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.234 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.234 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.234 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.234 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.234 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.235 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.235 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.235 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.235 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.235 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.235 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.235 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.236 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.236 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.236 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.236 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.236 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.236 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.236 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.237 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.237 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.237 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.237 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.237 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.238 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.238 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.238 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.238 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.238 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.238 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.239 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.239 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.239 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.239 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.239 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.240 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.240 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.240 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.240 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.240 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.240 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.241 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.241 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.241 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.241 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.241 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.241 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.241 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.242 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.242 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.242 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.242 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.242 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.242 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.242 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.242 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.243 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.243 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.243 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.243 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.243 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.243 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.244 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.244 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.244 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.244 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.244 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.245 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.245 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.245 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.245 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.245 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.245 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.246 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.246 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.246 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.246 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.246 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.247 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.247 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.247 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.247 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.247 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.247 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.248 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.248 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.248 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.248 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.248 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.248 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.248 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.249 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.249 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.249 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.249 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.249 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.249 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.249 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.250 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.250 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.250 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.250 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.250 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.250 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.250 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.251 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.251 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.251 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.251 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.251 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.251 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.251 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.252 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.252 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.252 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.252 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.253 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.253 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.253 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.253 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.253 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.253 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.254 236262 DEBUG oslo_service.service [None req-a3aa99fe-a8ee-475f-9f90-21bc4bc42e24 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.255 236262 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.269 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.270 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.270 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.270 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 29 09:28:42 compute-0 ceph-mon[75183]: pgmap v539: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:42 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 29 09:28:42 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.338 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f29f3163250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.341 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f29f3163250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.342 236262 INFO nova.virt.libvirt.driver [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Connection event '1' reason 'None'
Jan 29 09:28:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.359 236262 WARNING nova.virt.libvirt.driver [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 29 09:28:42 compute-0 nova_compute[236255]: 2026-01-29 09:28:42.360 236262 DEBUG nova.virt.libvirt.volume.mount [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 29 09:28:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v540: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:42 compute-0 podman[236618]: 2026-01-29 09:28:42.933996368 +0000 UTC m=+0.106816663 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.156 236262 INFO nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Libvirt host capabilities <capabilities>
Jan 29 09:28:43 compute-0 nova_compute[236255]: 
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <host>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <uuid>ff47f49d-26ab-48e1-aa1a-aeb921932033</uuid>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <cpu>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <arch>x86_64</arch>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model>EPYC-Rome-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <vendor>AMD</vendor>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <microcode version='16777317'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <signature family='23' model='49' stepping='0'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='x2apic'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='tsc-deadline'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='osxsave'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='hypervisor'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='tsc_adjust'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='spec-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='stibp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='arch-capabilities'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='cmp_legacy'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='topoext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='virt-ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='lbrv'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='tsc-scale'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='vmcb-clean'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='pause-filter'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='pfthreshold'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='svme-addr-chk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='rdctl-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='skip-l1dfl-vmentry'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='mds-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature name='pschange-mc-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <pages unit='KiB' size='4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <pages unit='KiB' size='2048'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <pages unit='KiB' size='1048576'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </cpu>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <power_management>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <suspend_mem/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </power_management>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <iommu support='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <migration_features>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <live/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <uri_transports>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <uri_transport>tcp</uri_transport>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <uri_transport>rdma</uri_transport>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </uri_transports>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </migration_features>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <topology>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <cells num='1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <cell id='0'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:           <memory unit='KiB'>7864296</memory>
Jan 29 09:28:43 compute-0 nova_compute[236255]:           <pages unit='KiB' size='4'>1966074</pages>
Jan 29 09:28:43 compute-0 nova_compute[236255]:           <pages unit='KiB' size='2048'>0</pages>
Jan 29 09:28:43 compute-0 nova_compute[236255]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 29 09:28:43 compute-0 nova_compute[236255]:           <distances>
Jan 29 09:28:43 compute-0 nova_compute[236255]:             <sibling id='0' value='10'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:           </distances>
Jan 29 09:28:43 compute-0 nova_compute[236255]:           <cpus num='8'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:           </cpus>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         </cell>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </cells>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </topology>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <cache>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </cache>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <secmodel>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model>selinux</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <doi>0</doi>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </secmodel>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <secmodel>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model>dac</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <doi>0</doi>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </secmodel>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </host>
Jan 29 09:28:43 compute-0 nova_compute[236255]: 
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <guest>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <os_type>hvm</os_type>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <arch name='i686'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <wordsize>32</wordsize>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <domain type='qemu'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <domain type='kvm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </arch>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <features>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <pae/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <nonpae/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <acpi default='on' toggle='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <apic default='on' toggle='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <cpuselection/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <deviceboot/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <disksnapshot default='on' toggle='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <externalSnapshot/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </features>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </guest>
Jan 29 09:28:43 compute-0 nova_compute[236255]: 
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <guest>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <os_type>hvm</os_type>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <arch name='x86_64'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <wordsize>64</wordsize>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <domain type='qemu'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <domain type='kvm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </arch>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <features>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <acpi default='on' toggle='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <apic default='on' toggle='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <cpuselection/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <deviceboot/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <disksnapshot default='on' toggle='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <externalSnapshot/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </features>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </guest>
Jan 29 09:28:43 compute-0 nova_compute[236255]: 
Jan 29 09:28:43 compute-0 nova_compute[236255]: </capabilities>
Jan 29 09:28:43 compute-0 nova_compute[236255]: 
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.162 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.180 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 29 09:28:43 compute-0 nova_compute[236255]: <domainCapabilities>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <path>/usr/libexec/qemu-kvm</path>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <domain>kvm</domain>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <arch>i686</arch>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <vcpu max='4096'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <iothreads supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <os supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <enum name='firmware'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <loader supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>rom</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pflash</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='readonly'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>yes</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>no</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='secure'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>no</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </loader>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </os>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <cpu>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='host-passthrough' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='hostPassthroughMigratable'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>on</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>off</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='maximum' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='maximumMigratable'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>on</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>off</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='host-model' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <vendor>AMD</vendor>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='x2apic'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='tsc-deadline'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='hypervisor'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='tsc_adjust'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='spec-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='stibp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='cmp_legacy'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='overflow-recov'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='succor'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='amd-ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='virt-ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='lbrv'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='tsc-scale'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='vmcb-clean'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='flushbyasid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='pause-filter'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='pfthreshold'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='svme-addr-chk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='disable' name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='custom' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='ClearwaterForest'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ddpd-u'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sha512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm3'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='ClearwaterForest-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ddpd-u'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sha512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm3'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cooperlake'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cooperlake-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cooperlake-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Dhyana-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Genoa'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Genoa-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Genoa-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fs-gs-base-ns'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='perfmon-v2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Turin'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vp2intersect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fs-gs-base-ns'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibpb-brtype'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='perfmon-v2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbpb'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='srso-user-kernel-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Turin-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vp2intersect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fs-gs-base-ns'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibpb-brtype'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='perfmon-v2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbpb'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='srso-user-kernel-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-128'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-256'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-128'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-256'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v6'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v7'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='KnightsMill'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4fmaps'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4vnniw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512er'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512pf'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='KnightsMill-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4fmaps'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4vnniw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512er'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512pf'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G4-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tbm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G5-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tbm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='athlon'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='athlon-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='core2duo'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='core2duo-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='coreduo'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='coreduo-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='n270'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='n270-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='phenom'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='phenom-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </cpu>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <memoryBacking supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <enum name='sourceType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>file</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>anonymous</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>memfd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </memoryBacking>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <devices>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <disk supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='diskDevice'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>disk</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>cdrom</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>floppy</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>lun</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='bus'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>fdc</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>scsi</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>usb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>sata</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-non-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </disk>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <graphics supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vnc</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>egl-headless</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>dbus</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </graphics>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <video supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='modelType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vga</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>cirrus</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>none</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>bochs</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>ramfb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </video>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <hostdev supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='mode'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>subsystem</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='startupPolicy'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>default</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>mandatory</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>requisite</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>optional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='subsysType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>usb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pci</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>scsi</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='capsType'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='pciBackend'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </hostdev>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <rng supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-non-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendModel'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>random</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>egd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>builtin</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </rng>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <filesystem supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='driverType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>path</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>handle</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtiofs</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </filesystem>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <tpm supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tpm-tis</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tpm-crb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendModel'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>emulator</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>external</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendVersion'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>2.0</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </tpm>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <redirdev supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='bus'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>usb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </redirdev>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <channel supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pty</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>unix</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </channel>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <crypto supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>qemu</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendModel'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>builtin</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </crypto>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <interface supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>default</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>passt</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </interface>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <panic supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>isa</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>hyperv</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </panic>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <console supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>null</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vc</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pty</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>dev</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>file</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pipe</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>stdio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>udp</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tcp</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>unix</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>qemu-vdagent</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>dbus</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </console>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </devices>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <features>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <gic supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <vmcoreinfo supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <genid supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <backingStoreInput supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <backup supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <async-teardown supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <s390-pv supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <ps2 supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <tdx supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <sev supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <sgx supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <hyperv supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='features'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>relaxed</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vapic</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>spinlocks</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vpindex</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>runtime</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>synic</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>stimer</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>reset</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vendor_id</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>frequencies</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>reenlightenment</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tlbflush</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>ipi</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>avic</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>emsr_bitmap</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>xmm_input</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <defaults>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <spinlocks>4095</spinlocks>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <stimer_direct>on</stimer_direct>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <tlbflush_direct>on</tlbflush_direct>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <tlbflush_extended>on</tlbflush_extended>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </defaults>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </hyperv>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <launchSecurity supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </features>
Jan 29 09:28:43 compute-0 nova_compute[236255]: </domainCapabilities>
Jan 29 09:28:43 compute-0 nova_compute[236255]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.188 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 29 09:28:43 compute-0 nova_compute[236255]: <domainCapabilities>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <path>/usr/libexec/qemu-kvm</path>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <domain>kvm</domain>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <arch>i686</arch>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <vcpu max='240'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <iothreads supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <os supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <enum name='firmware'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <loader supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>rom</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pflash</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='readonly'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>yes</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>no</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='secure'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>no</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </loader>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </os>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <cpu>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='host-passthrough' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='hostPassthroughMigratable'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>on</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>off</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='maximum' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='maximumMigratable'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>on</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>off</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='host-model' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <vendor>AMD</vendor>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='x2apic'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='tsc-deadline'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='hypervisor'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='tsc_adjust'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='spec-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='stibp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='cmp_legacy'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='overflow-recov'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='succor'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='amd-ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='virt-ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='lbrv'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='tsc-scale'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='vmcb-clean'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='flushbyasid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='pause-filter'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='pfthreshold'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='svme-addr-chk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='disable' name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='custom' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='ClearwaterForest'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ddpd-u'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sha512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm3'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='ClearwaterForest-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ddpd-u'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sha512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm3'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cooperlake'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cooperlake-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cooperlake-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Dhyana-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Genoa'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Genoa-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Genoa-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fs-gs-base-ns'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='perfmon-v2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Turin'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vp2intersect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fs-gs-base-ns'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibpb-brtype'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='perfmon-v2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbpb'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='srso-user-kernel-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Turin-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vp2intersect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fs-gs-base-ns'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibpb-brtype'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='perfmon-v2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbpb'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='srso-user-kernel-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-128'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-256'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-128'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-256'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v6'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v7'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='KnightsMill'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4fmaps'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4vnniw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512er'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512pf'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='KnightsMill-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4fmaps'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4vnniw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512er'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512pf'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G4-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tbm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G5-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tbm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='athlon'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='athlon-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='core2duo'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='core2duo-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='coreduo'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='coreduo-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='n270'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='n270-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='phenom'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='phenom-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </cpu>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <memoryBacking supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <enum name='sourceType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>file</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>anonymous</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>memfd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </memoryBacking>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <devices>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <disk supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='diskDevice'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>disk</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>cdrom</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>floppy</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>lun</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='bus'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>ide</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>fdc</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>scsi</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>usb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>sata</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-non-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </disk>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <graphics supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vnc</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>egl-headless</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>dbus</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </graphics>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <video supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='modelType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vga</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>cirrus</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>none</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>bochs</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>ramfb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </video>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <hostdev supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='mode'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>subsystem</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='startupPolicy'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>default</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>mandatory</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>requisite</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>optional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='subsysType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>usb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pci</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>scsi</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='capsType'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='pciBackend'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </hostdev>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <rng supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-non-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendModel'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>random</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>egd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>builtin</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </rng>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <filesystem supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='driverType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>path</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>handle</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtiofs</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </filesystem>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <tpm supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tpm-tis</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tpm-crb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendModel'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>emulator</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>external</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendVersion'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>2.0</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </tpm>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <redirdev supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='bus'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>usb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </redirdev>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <channel supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pty</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>unix</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </channel>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <crypto supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>qemu</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendModel'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>builtin</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </crypto>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <interface supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>default</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>passt</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </interface>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <panic supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>isa</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>hyperv</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </panic>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <console supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>null</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vc</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pty</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>dev</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>file</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pipe</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>stdio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>udp</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tcp</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>unix</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>qemu-vdagent</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>dbus</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </console>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </devices>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <features>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <gic supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <vmcoreinfo supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <genid supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <backingStoreInput supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <backup supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <async-teardown supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <s390-pv supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <ps2 supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <tdx supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <sev supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <sgx supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <hyperv supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='features'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>relaxed</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vapic</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>spinlocks</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vpindex</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>runtime</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>synic</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>stimer</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>reset</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vendor_id</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>frequencies</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>reenlightenment</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tlbflush</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>ipi</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>avic</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>emsr_bitmap</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>xmm_input</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <defaults>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <spinlocks>4095</spinlocks>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <stimer_direct>on</stimer_direct>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <tlbflush_direct>on</tlbflush_direct>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <tlbflush_extended>on</tlbflush_extended>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </defaults>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </hyperv>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <launchSecurity supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </features>
Jan 29 09:28:43 compute-0 nova_compute[236255]: </domainCapabilities>
Jan 29 09:28:43 compute-0 nova_compute[236255]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.242 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.247 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 29 09:28:43 compute-0 nova_compute[236255]: <domainCapabilities>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <path>/usr/libexec/qemu-kvm</path>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <domain>kvm</domain>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <arch>x86_64</arch>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <vcpu max='4096'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <iothreads supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <os supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <enum name='firmware'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>efi</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <loader supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>rom</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pflash</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='readonly'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>yes</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>no</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='secure'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>yes</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>no</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </loader>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </os>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <cpu>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='host-passthrough' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='hostPassthroughMigratable'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>on</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>off</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='maximum' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='maximumMigratable'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>on</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>off</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='host-model' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <vendor>AMD</vendor>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='x2apic'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='tsc-deadline'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='hypervisor'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='tsc_adjust'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='spec-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='stibp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='cmp_legacy'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='overflow-recov'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='succor'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='amd-ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='virt-ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='lbrv'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='tsc-scale'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='vmcb-clean'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='flushbyasid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='pause-filter'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='pfthreshold'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='svme-addr-chk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='disable' name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='custom' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='ClearwaterForest'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ddpd-u'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sha512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm3'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='ClearwaterForest-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ddpd-u'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sha512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm3'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cooperlake'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cooperlake-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cooperlake-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Dhyana-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Genoa'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Genoa-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Genoa-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fs-gs-base-ns'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='perfmon-v2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Turin'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vp2intersect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fs-gs-base-ns'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibpb-brtype'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='perfmon-v2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbpb'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='srso-user-kernel-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Turin-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vp2intersect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fs-gs-base-ns'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibpb-brtype'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='perfmon-v2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbpb'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='srso-user-kernel-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-128'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-256'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-128'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-256'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v6'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v7'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='KnightsMill'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4fmaps'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4vnniw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512er'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512pf'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='KnightsMill-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4fmaps'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4vnniw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512er'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512pf'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G4-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tbm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G5-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tbm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='athlon'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='athlon-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='core2duo'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='core2duo-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='coreduo'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='coreduo-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='n270'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='n270-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='phenom'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='phenom-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </cpu>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <memoryBacking supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <enum name='sourceType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>file</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>anonymous</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>memfd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </memoryBacking>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <devices>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <disk supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='diskDevice'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>disk</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>cdrom</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>floppy</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>lun</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='bus'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>fdc</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>scsi</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>usb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>sata</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-non-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </disk>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <graphics supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vnc</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>egl-headless</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>dbus</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </graphics>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <video supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='modelType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vga</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>cirrus</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>none</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>bochs</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>ramfb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </video>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <hostdev supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='mode'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>subsystem</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='startupPolicy'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>default</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>mandatory</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>requisite</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>optional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='subsysType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>usb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pci</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>scsi</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='capsType'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='pciBackend'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </hostdev>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <rng supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-non-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendModel'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>random</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>egd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>builtin</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </rng>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <filesystem supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='driverType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>path</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>handle</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtiofs</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </filesystem>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <tpm supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tpm-tis</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tpm-crb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendModel'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>emulator</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>external</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendVersion'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>2.0</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </tpm>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <redirdev supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='bus'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>usb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </redirdev>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <channel supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pty</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>unix</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </channel>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <crypto supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>qemu</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendModel'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>builtin</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </crypto>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <interface supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>default</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>passt</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </interface>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <panic supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>isa</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>hyperv</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </panic>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <console supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>null</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vc</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pty</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>dev</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>file</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pipe</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>stdio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>udp</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tcp</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>unix</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>qemu-vdagent</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>dbus</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </console>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </devices>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <features>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <gic supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <vmcoreinfo supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <genid supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <backingStoreInput supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <backup supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <async-teardown supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <s390-pv supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <ps2 supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <tdx supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <sev supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <sgx supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <hyperv supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='features'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>relaxed</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vapic</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>spinlocks</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vpindex</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>runtime</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>synic</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>stimer</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>reset</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vendor_id</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>frequencies</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>reenlightenment</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tlbflush</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>ipi</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>avic</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>emsr_bitmap</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>xmm_input</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <defaults>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <spinlocks>4095</spinlocks>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <stimer_direct>on</stimer_direct>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <tlbflush_direct>on</tlbflush_direct>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <tlbflush_extended>on</tlbflush_extended>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </defaults>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </hyperv>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <launchSecurity supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </features>
Jan 29 09:28:43 compute-0 nova_compute[236255]: </domainCapabilities>
Jan 29 09:28:43 compute-0 nova_compute[236255]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.327 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 29 09:28:43 compute-0 nova_compute[236255]: <domainCapabilities>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <path>/usr/libexec/qemu-kvm</path>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <domain>kvm</domain>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <arch>x86_64</arch>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <vcpu max='240'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <iothreads supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <os supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <enum name='firmware'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <loader supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>rom</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pflash</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='readonly'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>yes</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>no</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='secure'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>no</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </loader>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </os>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <cpu>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='host-passthrough' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='hostPassthroughMigratable'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>on</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>off</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='maximum' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='maximumMigratable'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>on</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>off</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='host-model' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <vendor>AMD</vendor>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='x2apic'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='tsc-deadline'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='hypervisor'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='tsc_adjust'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='spec-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='stibp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='cmp_legacy'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='overflow-recov'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='succor'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='amd-ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='virt-ssbd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='lbrv'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='tsc-scale'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='vmcb-clean'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='flushbyasid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='pause-filter'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='pfthreshold'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='svme-addr-chk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <feature policy='disable' name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <mode name='custom' supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Broadwell-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cascadelake-Server-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='ClearwaterForest'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ddpd-u'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sha512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm3'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='ClearwaterForest-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ddpd-u'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sha512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm3'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sm4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cooperlake'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cooperlake-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Cooperlake-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Denverton-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Dhyana-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Genoa'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Genoa-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Genoa-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fs-gs-base-ns'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='perfmon-v2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Milan-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Rome-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Turin'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vp2intersect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fs-gs-base-ns'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibpb-brtype'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='perfmon-v2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbpb'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='srso-user-kernel-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-Turin-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amd-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='auto-ibrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vp2intersect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fs-gs-base-ns'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibpb-brtype'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='no-nested-data-bp'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='null-sel-clr-base'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='perfmon-v2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbpb'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='srso-user-kernel-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='stibp-always-on'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='EPYC-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-128'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-256'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='GraniteRapids-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-128'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-256'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx10-512'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='prefetchiti'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Haswell-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-noTSX'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v6'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Icelake-Server-v7'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='IvyBridge-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='KnightsMill'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4fmaps'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4vnniw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512er'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512pf'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='KnightsMill-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4fmaps'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-4vnniw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512er'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512pf'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G4-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tbm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Opteron_G5-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fma4'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tbm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xop'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SapphireRapids-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='amx-tile'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-bf16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-fp16'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512-vpopcntdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bitalg'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vbmi2'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrc'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fzrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='la57'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='taa-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='tsx-ldtrk'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='SierraForest-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ifma'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-ne-convert'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx-vnni-int8'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bhi-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='bus-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cmpccxadd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fbsdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='fsrs'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ibrs-all'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='intel-psfd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ipred-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='lam'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mcdt-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pbrsb-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='psdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rrsba-ctrl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='sbdr-ssdp-no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='serialize'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vaes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='vpclmulqdq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Client-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='hle'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='rtm'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Skylake-Server-v5'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512bw'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512cd'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512dq'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512f'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='avx512vl'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='invpcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pcid'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='pku'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='mpx'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v2'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v3'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='core-capability'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='split-lock-detect'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='Snowridge-v4'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='cldemote'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='erms'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='gfni'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdir64b'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='movdiri'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='xsaves'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='athlon'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='athlon-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='core2duo'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='core2duo-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='coreduo'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='coreduo-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='n270'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='n270-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='ss'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='phenom'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <blockers model='phenom-v1'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnow'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <feature name='3dnowext'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </blockers>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </mode>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </cpu>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <memoryBacking supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <enum name='sourceType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>file</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>anonymous</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <value>memfd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </memoryBacking>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <devices>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <disk supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='diskDevice'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>disk</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>cdrom</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>floppy</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>lun</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='bus'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>ide</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>fdc</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>scsi</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>usb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>sata</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-non-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </disk>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <graphics supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vnc</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>egl-headless</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>dbus</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </graphics>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <video supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='modelType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vga</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>cirrus</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>none</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>bochs</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>ramfb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </video>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <hostdev supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='mode'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>subsystem</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='startupPolicy'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>default</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>mandatory</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>requisite</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>optional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='subsysType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>usb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pci</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>scsi</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='capsType'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='pciBackend'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </hostdev>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <rng supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtio-non-transitional</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendModel'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>random</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>egd</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>builtin</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </rng>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <filesystem supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='driverType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>path</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>handle</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>virtiofs</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </filesystem>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <tpm supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tpm-tis</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tpm-crb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendModel'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>emulator</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>external</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendVersion'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>2.0</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </tpm>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <redirdev supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='bus'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>usb</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </redirdev>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <channel supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pty</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>unix</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </channel>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <crypto supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>qemu</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendModel'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>builtin</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </crypto>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <interface supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='backendType'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>default</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>passt</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </interface>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <panic supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='model'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>isa</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>hyperv</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </panic>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <console supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='type'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>null</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vc</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pty</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>dev</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>file</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>pipe</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>stdio</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>udp</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tcp</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>unix</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>qemu-vdagent</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>dbus</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </console>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </devices>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   <features>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <gic supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <vmcoreinfo supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <genid supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <backingStoreInput supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <backup supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <async-teardown supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <s390-pv supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <ps2 supported='yes'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <tdx supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <sev supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <sgx supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <hyperv supported='yes'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <enum name='features'>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>relaxed</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vapic</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>spinlocks</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vpindex</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>runtime</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>synic</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>stimer</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>reset</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>vendor_id</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>frequencies</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>reenlightenment</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>tlbflush</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>ipi</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>avic</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>emsr_bitmap</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <value>xmm_input</value>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </enum>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       <defaults>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <spinlocks>4095</spinlocks>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <stimer_direct>on</stimer_direct>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <tlbflush_direct>on</tlbflush_direct>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <tlbflush_extended>on</tlbflush_extended>
Jan 29 09:28:43 compute-0 nova_compute[236255]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 29 09:28:43 compute-0 nova_compute[236255]:       </defaults>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     </hyperv>
Jan 29 09:28:43 compute-0 nova_compute[236255]:     <launchSecurity supported='no'/>
Jan 29 09:28:43 compute-0 nova_compute[236255]:   </features>
Jan 29 09:28:43 compute-0 nova_compute[236255]: </domainCapabilities>
Jan 29 09:28:43 compute-0 nova_compute[236255]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.398 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.399 236262 INFO nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Secure Boot support detected
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.401 236262 INFO nova.virt.libvirt.driver [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.409 236262 DEBUG nova.virt.libvirt.driver [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.450 236262 INFO nova.virt.node [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Determined node identity 2689825d-8fa0-473a-adf1-5005faba9bec from /var/lib/nova/compute_id
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.475 236262 WARNING nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Compute nodes ['2689825d-8fa0-473a-adf1-5005faba9bec'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.514 236262 INFO nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.545 236262 WARNING nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.546 236262 DEBUG oslo_concurrency.lockutils [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.546 236262 DEBUG oslo_concurrency.lockutils [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.546 236262 DEBUG oslo_concurrency.lockutils [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.546 236262 DEBUG nova.compute.resource_tracker [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:28:43 compute-0 nova_compute[236255]: 2026-01-29 09:28:43.547 236262 DEBUG oslo_concurrency.processutils [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:28:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:28:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3760245606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:28:44 compute-0 nova_compute[236255]: 2026-01-29 09:28:44.080 236262 DEBUG oslo_concurrency.processutils [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:28:44 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 29 09:28:44 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 29 09:28:44 compute-0 ceph-mon[75183]: pgmap v540: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:44 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3760245606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:28:44 compute-0 nova_compute[236255]: 2026-01-29 09:28:44.334 236262 WARNING nova.virt.libvirt.driver [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:28:44 compute-0 nova_compute[236255]: 2026-01-29 09:28:44.337 236262 DEBUG nova.compute.resource_tracker [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5241MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:28:44 compute-0 nova_compute[236255]: 2026-01-29 09:28:44.337 236262 DEBUG oslo_concurrency.lockutils [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:28:44 compute-0 nova_compute[236255]: 2026-01-29 09:28:44.337 236262 DEBUG oslo_concurrency.lockutils [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:28:44 compute-0 nova_compute[236255]: 2026-01-29 09:28:44.355 236262 WARNING nova.compute.resource_tracker [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] No compute node record for compute-0.ctlplane.example.com:2689825d-8fa0-473a-adf1-5005faba9bec: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 2689825d-8fa0-473a-adf1-5005faba9bec could not be found.
Jan 29 09:28:44 compute-0 nova_compute[236255]: 2026-01-29 09:28:44.377 236262 INFO nova.compute.resource_tracker [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 2689825d-8fa0-473a-adf1-5005faba9bec
Jan 29 09:28:44 compute-0 nova_compute[236255]: 2026-01-29 09:28:44.441 236262 DEBUG nova.compute.resource_tracker [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:28:44 compute-0 nova_compute[236255]: 2026-01-29 09:28:44.441 236262 DEBUG nova.compute.resource_tracker [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:28:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v541: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:45 compute-0 nova_compute[236255]: 2026-01-29 09:28:45.292 236262 INFO nova.scheduler.client.report [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [req-018effb9-99d8-411f-aeb0-3fa208b38f0b] Created resource provider record via placement API for resource provider with UUID 2689825d-8fa0-473a-adf1-5005faba9bec and name compute-0.ctlplane.example.com.
Jan 29 09:28:45 compute-0 nova_compute[236255]: 2026-01-29 09:28:45.639 236262 DEBUG oslo_concurrency.processutils [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:28:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:28:46 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/863775260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.147 236262 DEBUG oslo_concurrency.processutils [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.153 236262 DEBUG nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 29 09:28:46 compute-0 nova_compute[236255]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.154 236262 INFO nova.virt.libvirt.host [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] kernel doesn't support AMD SEV
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.156 236262 DEBUG nova.compute.provider_tree [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Updating inventory in ProviderTree for provider 2689825d-8fa0-473a-adf1-5005faba9bec with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.157 236262 DEBUG nova.virt.libvirt.driver [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.212 236262 DEBUG nova.scheduler.client.report [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Updated inventory for provider 2689825d-8fa0-473a-adf1-5005faba9bec with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.212 236262 DEBUG nova.compute.provider_tree [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Updating resource provider 2689825d-8fa0-473a-adf1-5005faba9bec generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.213 236262 DEBUG nova.compute.provider_tree [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Updating inventory in ProviderTree for provider 2689825d-8fa0-473a-adf1-5005faba9bec with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 29 09:28:46 compute-0 ceph-mon[75183]: pgmap v541: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:46 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/863775260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.315 236262 DEBUG nova.compute.provider_tree [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Updating resource provider 2689825d-8fa0-473a-adf1-5005faba9bec generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.353 236262 DEBUG nova.compute.resource_tracker [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.354 236262 DEBUG oslo_concurrency.lockutils [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.354 236262 DEBUG nova.service [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.436 236262 DEBUG nova.service [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 29 09:28:46 compute-0 nova_compute[236255]: 2026-01-29 09:28:46.437 236262 DEBUG nova.servicegroup.drivers.db [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 29 09:28:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v542: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:28:48 compute-0 ceph-mon[75183]: pgmap v542: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v543: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:50 compute-0 ceph-mon[75183]: pgmap v543: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v544: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:51 compute-0 ceph-mon[75183]: pgmap v544: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:28:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v545: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:53 compute-0 ceph-mon[75183]: pgmap v545: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v546: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:55 compute-0 ceph-mon[75183]: pgmap v546: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:28:55
Jan 29 09:28:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:28:55 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:28:55 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups', '.mgr']
Jan 29 09:28:55 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:28:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v547: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:28:57 compute-0 ceph-mon[75183]: pgmap v547: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v548: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:28:59 compute-0 ceph-mon[75183]: pgmap v548: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v549: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:29:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:29:01 compute-0 ceph-mon[75183]: pgmap v549: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:29:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v550: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:04 compute-0 ceph-mon[75183]: pgmap v550: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v551: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:29:05 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019090428' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:29:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:29:05 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019090428' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:29:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:29:05 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3218217018' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:29:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:29:05 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3218217018' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:29:06 compute-0 ceph-mon[75183]: pgmap v551: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:06 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3019090428' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:29:06 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3019090428' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:29:06 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3218217018' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:29:06 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3218217018' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:29:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:29:06 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/144654870' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:29:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:29:06 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/144654870' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:29:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v552: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:29:07 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/144654870' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:29:07 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/144654870' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:29:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v553: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:08 compute-0 ceph-mon[75183]: pgmap v552: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:29:09.030 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:29:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:29:09.030 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:29:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:29:09.031 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:29:09 compute-0 ceph-mon[75183]: pgmap v553: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:10 compute-0 podman[236714]: 2026-01-29 09:29:10.120873379 +0000 UTC m=+0.061366025 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:29:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v554: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:11 compute-0 ceph-mon[75183]: pgmap v554: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:29:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v555: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:13 compute-0 podman[236740]: 2026-01-29 09:29:13.100093019 +0000 UTC m=+0.044765039 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 29 09:29:13 compute-0 ceph-mon[75183]: pgmap v555: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v556: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:15 compute-0 ceph-mon[75183]: pgmap v556: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v557: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.366239) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678957366267, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1022, "num_deletes": 251, "total_data_size": 1018063, "memory_usage": 1036680, "flush_reason": "Manual Compaction"}
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678957373421, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 619820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11051, "largest_seqno": 12072, "table_properties": {"data_size": 615871, "index_size": 1601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10026, "raw_average_key_size": 20, "raw_value_size": 607434, "raw_average_value_size": 1214, "num_data_blocks": 73, "num_entries": 500, "num_filter_entries": 500, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769678855, "oldest_key_time": 1769678855, "file_creation_time": 1769678957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 7217 microseconds, and 2003 cpu microseconds.
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.373456) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 619820 bytes OK
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.373470) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.374860) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.374873) EVENT_LOG_v1 {"time_micros": 1769678957374868, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.374892) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1013250, prev total WAL file size 1013250, number of live WAL files 2.
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.375294) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(605KB)], [29(5750KB)]
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678957375338, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 6508357, "oldest_snapshot_seqno": -1}
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3246 keys, 4780816 bytes, temperature: kUnknown
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678957412270, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 4780816, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4758453, "index_size": 13166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 75335, "raw_average_key_size": 23, "raw_value_size": 4699644, "raw_average_value_size": 1447, "num_data_blocks": 587, "num_entries": 3246, "num_filter_entries": 3246, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677896, "oldest_key_time": 0, "file_creation_time": 1769678957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.412555) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 4780816 bytes
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.414265) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.8 rd, 129.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 5.6 +0.0 blob) out(4.6 +0.0 blob), read-write-amplify(18.2) write-amplify(7.7) OK, records in: 3714, records dropped: 468 output_compression: NoCompression
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.414294) EVENT_LOG_v1 {"time_micros": 1769678957414280, "job": 12, "event": "compaction_finished", "compaction_time_micros": 37018, "compaction_time_cpu_micros": 10995, "output_level": 6, "num_output_files": 1, "total_output_size": 4780816, "num_input_records": 3714, "num_output_records": 3246, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678957414526, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769678957415416, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.375227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.415492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.415498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.415499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.415501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:29:17 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:29:17.415502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:29:17 compute-0 ceph-mon[75183]: pgmap v557: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v558: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:20 compute-0 ceph-mon[75183]: pgmap v558: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v559: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:22 compute-0 ceph-mon[75183]: pgmap v559: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:29:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v560: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:24 compute-0 ceph-mon[75183]: pgmap v560: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v561: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:26 compute-0 ceph-mon[75183]: pgmap v561: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:29:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:29:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:29:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:29:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:29:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:29:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v562: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:29:28 compute-0 ceph-mon[75183]: pgmap v562: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v563: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:30 compute-0 ceph-mon[75183]: pgmap v563: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v564: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:32 compute-0 ceph-mon[75183]: pgmap v564: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:29:32 compute-0 nova_compute[236255]: 2026-01-29 09:29:32.439 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:29:32 compute-0 nova_compute[236255]: 2026-01-29 09:29:32.466 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:29:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v565: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:34 compute-0 ceph-mon[75183]: pgmap v565: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v566: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:35 compute-0 ceph-mon[75183]: pgmap v566: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v567: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:29:38 compute-0 ceph-mon[75183]: pgmap v567: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v568: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:39 compute-0 sudo[236759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:29:39 compute-0 sudo[236759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:29:39 compute-0 sudo[236759]: pam_unix(sudo:session): session closed for user root
Jan 29 09:29:39 compute-0 sudo[236784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:29:39 compute-0 sudo[236784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:29:39 compute-0 sudo[236784]: pam_unix(sudo:session): session closed for user root
Jan 29 09:29:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:29:39 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:29:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:29:39 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:29:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:29:39 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:29:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:29:39 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:29:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:29:39 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:29:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:29:39 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:29:40 compute-0 sudo[236840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:29:40 compute-0 sudo[236840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:29:40 compute-0 sudo[236840]: pam_unix(sudo:session): session closed for user root
Jan 29 09:29:40 compute-0 sudo[236865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:29:40 compute-0 sudo[236865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:29:40 compute-0 ceph-mon[75183]: pgmap v568: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:29:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:29:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:29:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:29:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:29:40 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:29:40 compute-0 podman[236901]: 2026-01-29 09:29:40.316703044 +0000 UTC m=+0.044279816 container create 79ff337308656b87016b9c11289e3d62691ecc910fac972084aa487f49b87ece (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lehmann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 09:29:40 compute-0 systemd[1]: Started libpod-conmon-79ff337308656b87016b9c11289e3d62691ecc910fac972084aa487f49b87ece.scope.
Jan 29 09:29:40 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:29:40 compute-0 podman[236901]: 2026-01-29 09:29:40.297811304 +0000 UTC m=+0.025388116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:29:40 compute-0 podman[236901]: 2026-01-29 09:29:40.399035095 +0000 UTC m=+0.126611947 container init 79ff337308656b87016b9c11289e3d62691ecc910fac972084aa487f49b87ece (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lehmann, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:29:40 compute-0 podman[236901]: 2026-01-29 09:29:40.409487417 +0000 UTC m=+0.137064199 container start 79ff337308656b87016b9c11289e3d62691ecc910fac972084aa487f49b87ece (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lehmann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:29:40 compute-0 podman[236901]: 2026-01-29 09:29:40.413707881 +0000 UTC m=+0.141284663 container attach 79ff337308656b87016b9c11289e3d62691ecc910fac972084aa487f49b87ece (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lehmann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:29:40 compute-0 systemd[1]: libpod-79ff337308656b87016b9c11289e3d62691ecc910fac972084aa487f49b87ece.scope: Deactivated successfully.
Jan 29 09:29:40 compute-0 tender_lehmann[236918]: 167 167
Jan 29 09:29:40 compute-0 conmon[236918]: conmon 79ff337308656b87016b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79ff337308656b87016b9c11289e3d62691ecc910fac972084aa487f49b87ece.scope/container/memory.events
Jan 29 09:29:40 compute-0 podman[236901]: 2026-01-29 09:29:40.418548052 +0000 UTC m=+0.146124824 container died 79ff337308656b87016b9c11289e3d62691ecc910fac972084aa487f49b87ece (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:29:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdc289828b2be47821dc3e24aafd22ae8ad5fb6696a6a4c4bcb217fba83cccb9-merged.mount: Deactivated successfully.
Jan 29 09:29:40 compute-0 podman[236915]: 2026-01-29 09:29:40.444562383 +0000 UTC m=+0.094996433 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 09:29:40 compute-0 podman[236901]: 2026-01-29 09:29:40.464576933 +0000 UTC m=+0.192153715 container remove 79ff337308656b87016b9c11289e3d62691ecc910fac972084aa487f49b87ece (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lehmann, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:29:40 compute-0 systemd[1]: libpod-conmon-79ff337308656b87016b9c11289e3d62691ecc910fac972084aa487f49b87ece.scope: Deactivated successfully.
Jan 29 09:29:40 compute-0 podman[236964]: 2026-01-29 09:29:40.613252094 +0000 UTC m=+0.052232060 container create ccc9ee07fb77ed791a41710c2e5ade91d015dcc628e93896537818e4ef371e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:29:40 compute-0 systemd[1]: Started libpod-conmon-ccc9ee07fb77ed791a41710c2e5ade91d015dcc628e93896537818e4ef371e8d.scope.
Jan 29 09:29:40 compute-0 podman[236964]: 2026-01-29 09:29:40.586887773 +0000 UTC m=+0.025867739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:29:40 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca49a3aade8bf583d83ab5d9421510247c0ef0476cbeb5fb5a12916a13b6ee8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca49a3aade8bf583d83ab5d9421510247c0ef0476cbeb5fb5a12916a13b6ee8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca49a3aade8bf583d83ab5d9421510247c0ef0476cbeb5fb5a12916a13b6ee8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca49a3aade8bf583d83ab5d9421510247c0ef0476cbeb5fb5a12916a13b6ee8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca49a3aade8bf583d83ab5d9421510247c0ef0476cbeb5fb5a12916a13b6ee8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:29:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v569: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:40 compute-0 podman[236964]: 2026-01-29 09:29:40.71427359 +0000 UTC m=+0.153253516 container init ccc9ee07fb77ed791a41710c2e5ade91d015dcc628e93896537818e4ef371e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shaw, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:29:40 compute-0 podman[236964]: 2026-01-29 09:29:40.721256598 +0000 UTC m=+0.160236524 container start ccc9ee07fb77ed791a41710c2e5ade91d015dcc628e93896537818e4ef371e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:29:40 compute-0 podman[236964]: 2026-01-29 09:29:40.728166694 +0000 UTC m=+0.167146630 container attach ccc9ee07fb77ed791a41710c2e5ade91d015dcc628e93896537818e4ef371e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 09:29:41 compute-0 stoic_shaw[236981]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:29:41 compute-0 stoic_shaw[236981]: --> All data devices are unavailable
Jan 29 09:29:41 compute-0 systemd[1]: libpod-ccc9ee07fb77ed791a41710c2e5ade91d015dcc628e93896537818e4ef371e8d.scope: Deactivated successfully.
Jan 29 09:29:41 compute-0 podman[236964]: 2026-01-29 09:29:41.166007206 +0000 UTC m=+0.604987162 container died ccc9ee07fb77ed791a41710c2e5ade91d015dcc628e93896537818e4ef371e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ca49a3aade8bf583d83ab5d9421510247c0ef0476cbeb5fb5a12916a13b6ee8-merged.mount: Deactivated successfully.
Jan 29 09:29:41 compute-0 podman[236964]: 2026-01-29 09:29:41.205922123 +0000 UTC m=+0.644902049 container remove ccc9ee07fb77ed791a41710c2e5ade91d015dcc628e93896537818e4ef371e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 09:29:41 compute-0 systemd[1]: libpod-conmon-ccc9ee07fb77ed791a41710c2e5ade91d015dcc628e93896537818e4ef371e8d.scope: Deactivated successfully.
Jan 29 09:29:41 compute-0 sudo[236865]: pam_unix(sudo:session): session closed for user root
Jan 29 09:29:41 compute-0 sudo[237014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:29:41 compute-0 sudo[237014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:29:41 compute-0 sudo[237014]: pam_unix(sudo:session): session closed for user root
Jan 29 09:29:41 compute-0 sudo[237039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:29:41 compute-0 sudo[237039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.558 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.560 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.560 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.560 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.626 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.626 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.626 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.627 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.627 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.627 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.628 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.628 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.628 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:29:41 compute-0 podman[237076]: 2026-01-29 09:29:41.66292024 +0000 UTC m=+0.036318461 container create b795d7380f8a571d40e5a936bc25f1c8b6af7ba7e1224cb2a5e61db46ad2e938 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 09:29:41 compute-0 systemd[1]: Started libpod-conmon-b795d7380f8a571d40e5a936bc25f1c8b6af7ba7e1224cb2a5e61db46ad2e938.scope.
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.696 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.696 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.696 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.697 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:29:41 compute-0 nova_compute[236255]: 2026-01-29 09:29:41.697 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:29:41 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:29:41 compute-0 podman[237076]: 2026-01-29 09:29:41.732371804 +0000 UTC m=+0.105770045 container init b795d7380f8a571d40e5a936bc25f1c8b6af7ba7e1224cb2a5e61db46ad2e938 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:29:41 compute-0 podman[237076]: 2026-01-29 09:29:41.737713938 +0000 UTC m=+0.111112149 container start b795d7380f8a571d40e5a936bc25f1c8b6af7ba7e1224cb2a5e61db46ad2e938 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 09:29:41 compute-0 priceless_carver[237093]: 167 167
Jan 29 09:29:41 compute-0 systemd[1]: libpod-b795d7380f8a571d40e5a936bc25f1c8b6af7ba7e1224cb2a5e61db46ad2e938.scope: Deactivated successfully.
Jan 29 09:29:41 compute-0 conmon[237093]: conmon b795d7380f8a571d40e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b795d7380f8a571d40e5a936bc25f1c8b6af7ba7e1224cb2a5e61db46ad2e938.scope/container/memory.events
Jan 29 09:29:41 compute-0 podman[237076]: 2026-01-29 09:29:41.64697175 +0000 UTC m=+0.020370001 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:29:41 compute-0 podman[237076]: 2026-01-29 09:29:41.743240577 +0000 UTC m=+0.116638818 container attach b795d7380f8a571d40e5a936bc25f1c8b6af7ba7e1224cb2a5e61db46ad2e938 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_carver, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 09:29:41 compute-0 podman[237076]: 2026-01-29 09:29:41.743968127 +0000 UTC m=+0.117366348 container died b795d7380f8a571d40e5a936bc25f1c8b6af7ba7e1224cb2a5e61db46ad2e938 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_carver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-fce90f27dd0c507955448585afdc88733f3eac10949199f4c67a7fb888106973-merged.mount: Deactivated successfully.
Jan 29 09:29:41 compute-0 podman[237076]: 2026-01-29 09:29:41.785501277 +0000 UTC m=+0.158899498 container remove b795d7380f8a571d40e5a936bc25f1c8b6af7ba7e1224cb2a5e61db46ad2e938 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_carver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:29:41 compute-0 systemd[1]: libpod-conmon-b795d7380f8a571d40e5a936bc25f1c8b6af7ba7e1224cb2a5e61db46ad2e938.scope: Deactivated successfully.
Jan 29 09:29:41 compute-0 podman[237137]: 2026-01-29 09:29:41.944785484 +0000 UTC m=+0.047426380 container create 38b42fb1947c79376bd1c7e62cd53fb82c73c388284bb4742f6b30f710597ae8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:29:41 compute-0 systemd[1]: Started libpod-conmon-38b42fb1947c79376bd1c7e62cd53fb82c73c388284bb4742f6b30f710597ae8.scope.
Jan 29 09:29:42 compute-0 podman[237137]: 2026-01-29 09:29:41.926023218 +0000 UTC m=+0.028664154 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:29:42 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64f2fc4c35e22f93bc087a74b5c4aad77f4b36ba72c551c7ce9c1c016755ee6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64f2fc4c35e22f93bc087a74b5c4aad77f4b36ba72c551c7ce9c1c016755ee6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64f2fc4c35e22f93bc087a74b5c4aad77f4b36ba72c551c7ce9c1c016755ee6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64f2fc4c35e22f93bc087a74b5c4aad77f4b36ba72c551c7ce9c1c016755ee6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:29:42 compute-0 podman[237137]: 2026-01-29 09:29:42.042746647 +0000 UTC m=+0.145387563 container init 38b42fb1947c79376bd1c7e62cd53fb82c73c388284bb4742f6b30f710597ae8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_brown, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:29:42 compute-0 podman[237137]: 2026-01-29 09:29:42.050615699 +0000 UTC m=+0.153256595 container start 38b42fb1947c79376bd1c7e62cd53fb82c73c388284bb4742f6b30f710597ae8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_brown, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 09:29:42 compute-0 podman[237137]: 2026-01-29 09:29:42.054061122 +0000 UTC m=+0.156702038 container attach 38b42fb1947c79376bd1c7e62cd53fb82c73c388284bb4742f6b30f710597ae8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 09:29:42 compute-0 ceph-mon[75183]: pgmap v569: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:29:42 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1789535282' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:29:42 compute-0 nova_compute[236255]: 2026-01-29 09:29:42.242 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:29:42 compute-0 vigilant_brown[237154]: {
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:     "0": [
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:         {
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "devices": [
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "/dev/loop3"
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             ],
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_name": "ceph_lv0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_size": "21470642176",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "name": "ceph_lv0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "tags": {
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.cluster_name": "ceph",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.crush_device_class": "",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.encrypted": "0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.objectstore": "bluestore",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.osd_id": "0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.type": "block",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.vdo": "0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.with_tpm": "0"
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             },
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "type": "block",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "vg_name": "ceph_vg0"
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:         }
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:     ],
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:     "1": [
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:         {
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "devices": [
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "/dev/loop4"
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             ],
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_name": "ceph_lv1",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_size": "21470642176",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "name": "ceph_lv1",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "tags": {
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.cluster_name": "ceph",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.crush_device_class": "",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.encrypted": "0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.objectstore": "bluestore",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.osd_id": "1",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.type": "block",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.vdo": "0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.with_tpm": "0"
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             },
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "type": "block",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "vg_name": "ceph_vg1"
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:         }
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:     ],
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:     "2": [
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:         {
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "devices": [
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "/dev/loop5"
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             ],
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_name": "ceph_lv2",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_size": "21470642176",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "name": "ceph_lv2",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "tags": {
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.cluster_name": "ceph",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.crush_device_class": "",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.encrypted": "0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.objectstore": "bluestore",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.osd_id": "2",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.type": "block",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.vdo": "0",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:                 "ceph.with_tpm": "0"
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             },
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "type": "block",
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:             "vg_name": "ceph_vg2"
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:         }
Jan 29 09:29:42 compute-0 vigilant_brown[237154]:     ]
Jan 29 09:29:42 compute-0 vigilant_brown[237154]: }
Jan 29 09:29:42 compute-0 systemd[1]: libpod-38b42fb1947c79376bd1c7e62cd53fb82c73c388284bb4742f6b30f710597ae8.scope: Deactivated successfully.
Jan 29 09:29:42 compute-0 podman[237137]: 2026-01-29 09:29:42.358873785 +0000 UTC m=+0.461514671 container died 38b42fb1947c79376bd1c7e62cd53fb82c73c388284bb4742f6b30f710597ae8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 09:29:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b64f2fc4c35e22f93bc087a74b5c4aad77f4b36ba72c551c7ce9c1c016755ee6-merged.mount: Deactivated successfully.
Jan 29 09:29:42 compute-0 podman[237137]: 2026-01-29 09:29:42.408066962 +0000 UTC m=+0.510707878 container remove 38b42fb1947c79376bd1c7e62cd53fb82c73c388284bb4742f6b30f710597ae8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 09:29:42 compute-0 systemd[1]: libpod-conmon-38b42fb1947c79376bd1c7e62cd53fb82c73c388284bb4742f6b30f710597ae8.scope: Deactivated successfully.
Jan 29 09:29:42 compute-0 sudo[237039]: pam_unix(sudo:session): session closed for user root
Jan 29 09:29:42 compute-0 sudo[237180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:29:42 compute-0 sudo[237180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:29:42 compute-0 sudo[237180]: pam_unix(sudo:session): session closed for user root
Jan 29 09:29:42 compute-0 sudo[237205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:29:42 compute-0 sudo[237205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:29:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v570: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:42 compute-0 podman[237242]: 2026-01-29 09:29:42.853994742 +0000 UTC m=+0.042233100 container create 2041e97d12f8ad0b27782fd4fd4a00bad77dca20f098de690fd5fa4cc7f19990 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mestorf, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:29:42 compute-0 systemd[1]: Started libpod-conmon-2041e97d12f8ad0b27782fd4fd4a00bad77dca20f098de690fd5fa4cc7f19990.scope.
Jan 29 09:29:42 compute-0 podman[237242]: 2026-01-29 09:29:42.830034156 +0000 UTC m=+0.018272514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:29:42 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:29:42 compute-0 podman[237242]: 2026-01-29 09:29:42.947866875 +0000 UTC m=+0.136105213 container init 2041e97d12f8ad0b27782fd4fd4a00bad77dca20f098de690fd5fa4cc7f19990 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:29:42 compute-0 podman[237242]: 2026-01-29 09:29:42.95587072 +0000 UTC m=+0.144109068 container start 2041e97d12f8ad0b27782fd4fd4a00bad77dca20f098de690fd5fa4cc7f19990 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mestorf, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:29:42 compute-0 podman[237242]: 2026-01-29 09:29:42.95919815 +0000 UTC m=+0.147436518 container attach 2041e97d12f8ad0b27782fd4fd4a00bad77dca20f098de690fd5fa4cc7f19990 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 09:29:42 compute-0 inspiring_mestorf[237258]: 167 167
Jan 29 09:29:42 compute-0 systemd[1]: libpod-2041e97d12f8ad0b27782fd4fd4a00bad77dca20f098de690fd5fa4cc7f19990.scope: Deactivated successfully.
Jan 29 09:29:42 compute-0 podman[237242]: 2026-01-29 09:29:42.962121939 +0000 UTC m=+0.150360267 container died 2041e97d12f8ad0b27782fd4fd4a00bad77dca20f098de690fd5fa4cc7f19990 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 09:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-207a17fc1d08f5b49de4ed6145344c15a7e7a83ed72574a05762f0d783401133-merged.mount: Deactivated successfully.
Jan 29 09:29:42 compute-0 podman[237242]: 2026-01-29 09:29:42.998078849 +0000 UTC m=+0.186317187 container remove 2041e97d12f8ad0b27782fd4fd4a00bad77dca20f098de690fd5fa4cc7f19990 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 09:29:43 compute-0 systemd[1]: libpod-conmon-2041e97d12f8ad0b27782fd4fd4a00bad77dca20f098de690fd5fa4cc7f19990.scope: Deactivated successfully.
Jan 29 09:29:43 compute-0 podman[237282]: 2026-01-29 09:29:43.143619225 +0000 UTC m=+0.048058717 container create ed52700843e043ebbeed8f2c0aac2703ad038debfd0b8212d89e5ab57ff19f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 09:29:43 compute-0 systemd[1]: Started libpod-conmon-ed52700843e043ebbeed8f2c0aac2703ad038debfd0b8212d89e5ab57ff19f29.scope.
Jan 29 09:29:43 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1789535282' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:29:43 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09682a1f5a442ae9cf94201034bd935cec5bce826d26039e4e05616e00e66241/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09682a1f5a442ae9cf94201034bd935cec5bce826d26039e4e05616e00e66241/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09682a1f5a442ae9cf94201034bd935cec5bce826d26039e4e05616e00e66241/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09682a1f5a442ae9cf94201034bd935cec5bce826d26039e4e05616e00e66241/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:29:43 compute-0 podman[237282]: 2026-01-29 09:29:43.118448526 +0000 UTC m=+0.022888038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:29:43 compute-0 podman[237282]: 2026-01-29 09:29:43.222809722 +0000 UTC m=+0.127249244 container init ed52700843e043ebbeed8f2c0aac2703ad038debfd0b8212d89e5ab57ff19f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_proskuriakova, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 09:29:43 compute-0 podman[237282]: 2026-01-29 09:29:43.230078588 +0000 UTC m=+0.134518080 container start ed52700843e043ebbeed8f2c0aac2703ad038debfd0b8212d89e5ab57ff19f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:29:43 compute-0 podman[237282]: 2026-01-29 09:29:43.233816759 +0000 UTC m=+0.138256251 container attach ed52700843e043ebbeed8f2c0aac2703ad038debfd0b8212d89e5ab57ff19f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_proskuriakova, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 09:29:43 compute-0 podman[237296]: 2026-01-29 09:29:43.275789001 +0000 UTC m=+0.092554078 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 29 09:29:43 compute-0 lvm[237392]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:29:43 compute-0 lvm[237393]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:29:43 compute-0 lvm[237393]: VG ceph_vg1 finished
Jan 29 09:29:43 compute-0 lvm[237392]: VG ceph_vg0 finished
Jan 29 09:29:43 compute-0 lvm[237395]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:29:43 compute-0 lvm[237395]: VG ceph_vg2 finished
Jan 29 09:29:44 compute-0 dazzling_proskuriakova[237299]: {}
Jan 29 09:29:44 compute-0 systemd[1]: libpod-ed52700843e043ebbeed8f2c0aac2703ad038debfd0b8212d89e5ab57ff19f29.scope: Deactivated successfully.
Jan 29 09:29:44 compute-0 systemd[1]: libpod-ed52700843e043ebbeed8f2c0aac2703ad038debfd0b8212d89e5ab57ff19f29.scope: Consumed 1.284s CPU time.
Jan 29 09:29:44 compute-0 podman[237282]: 2026-01-29 09:29:44.095898945 +0000 UTC m=+1.000338467 container died ed52700843e043ebbeed8f2c0aac2703ad038debfd0b8212d89e5ab57ff19f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_proskuriakova, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 09:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-09682a1f5a442ae9cf94201034bd935cec5bce826d26039e4e05616e00e66241-merged.mount: Deactivated successfully.
Jan 29 09:29:44 compute-0 podman[237282]: 2026-01-29 09:29:44.141554267 +0000 UTC m=+1.045993769 container remove ed52700843e043ebbeed8f2c0aac2703ad038debfd0b8212d89e5ab57ff19f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 29 09:29:44 compute-0 systemd[1]: libpod-conmon-ed52700843e043ebbeed8f2c0aac2703ad038debfd0b8212d89e5ab57ff19f29.scope: Deactivated successfully.
Jan 29 09:29:44 compute-0 sudo[237205]: pam_unix(sudo:session): session closed for user root
Jan 29 09:29:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:29:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:29:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:29:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:29:44 compute-0 ceph-mon[75183]: pgmap v570: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:44 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:29:44 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:29:44 compute-0 sudo[237410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:29:44 compute-0 sudo[237410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:29:44 compute-0 sudo[237410]: pam_unix(sudo:session): session closed for user root
Jan 29 09:29:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v571: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 29 09:29:45 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2750598532' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 29 09:29:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14304 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 29 09:29:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 29 09:29:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 29 09:29:45 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/2750598532' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 29 09:29:45 compute-0 nova_compute[236255]: 2026-01-29 09:29:45.459 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:29:45 compute-0 nova_compute[236255]: 2026-01-29 09:29:45.462 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5227MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:29:45 compute-0 nova_compute[236255]: 2026-01-29 09:29:45.462 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:29:45 compute-0 nova_compute[236255]: 2026-01-29 09:29:45.462 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:29:45 compute-0 nova_compute[236255]: 2026-01-29 09:29:45.568 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:29:45 compute-0 nova_compute[236255]: 2026-01-29 09:29:45.568 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:29:45 compute-0 nova_compute[236255]: 2026-01-29 09:29:45.596 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:29:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:29:46 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3825236949' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:29:46 compute-0 nova_compute[236255]: 2026-01-29 09:29:46.095 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:29:46 compute-0 nova_compute[236255]: 2026-01-29 09:29:46.100 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:29:46 compute-0 nova_compute[236255]: 2026-01-29 09:29:46.222 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:29:46 compute-0 ceph-mon[75183]: pgmap v571: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:46 compute-0 ceph-mon[75183]: from='client.14304 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 29 09:29:46 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3825236949' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:29:46 compute-0 nova_compute[236255]: 2026-01-29 09:29:46.319 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:29:46 compute-0 nova_compute[236255]: 2026-01-29 09:29:46.320 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:29:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v572: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:29:48 compute-0 ceph-mon[75183]: pgmap v572: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v573: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:50 compute-0 ceph-mon[75183]: pgmap v573: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v574: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:29:52 compute-0 ceph-mon[75183]: pgmap v574: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v575: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:53 compute-0 ceph-mon[75183]: pgmap v575: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v576: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:55 compute-0 ceph-mon[75183]: pgmap v576: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:55 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:29:55
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'vms', 'images', 'volumes', 'backups', 'cephfs.cephfs.data']
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:29:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v577: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:29:57 compute-0 ceph-mon[75183]: pgmap v577: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:29:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v578: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:00 compute-0 ceph-mon[75183]: pgmap v578: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v579: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:01 compute-0 ceph-mon[75183]: pgmap v579: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:30:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:30:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:30:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v580: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:03 compute-0 ceph-mon[75183]: pgmap v580: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v581: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:05 compute-0 ceph-mon[75183]: pgmap v581: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v582: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:30:07 compute-0 ceph-mon[75183]: pgmap v582: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 29 09:30:08 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2766610049' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 29 09:30:08 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14312 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 29 09:30:08 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 29 09:30:08 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 29 09:30:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v583: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:08 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/2766610049' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 29 09:30:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:30:09.031 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:30:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:30:09.031 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:30:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:30:09.031 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:30:09 compute-0 ceph-mon[75183]: from='client.14312 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 29 09:30:09 compute-0 ceph-mon[75183]: pgmap v583: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v584: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:11 compute-0 podman[237457]: 2026-01-29 09:30:11.148874778 +0000 UTC m=+0.088284063 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 29 09:30:11 compute-0 ceph-mon[75183]: pgmap v584: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:30:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v585: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:13 compute-0 ceph-mon[75183]: pgmap v585: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:14 compute-0 podman[237483]: 2026-01-29 09:30:14.119073184 +0000 UTC m=+0.058333355 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 29 09:30:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v586: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:15 compute-0 ceph-mon[75183]: pgmap v586: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v587: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:30:17 compute-0 ceph-mon[75183]: pgmap v587: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v588: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:19 compute-0 ceph-mon[75183]: pgmap v588: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v589: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:21 compute-0 ceph-mon[75183]: pgmap v589: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:30:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v590: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:23 compute-0 ceph-mon[75183]: pgmap v590: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v591: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:26 compute-0 ceph-mon[75183]: pgmap v591: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:30:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:30:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:30:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:30:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:30:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:30:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v592: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:30:28 compute-0 ceph-mon[75183]: pgmap v592: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v593: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:30 compute-0 ceph-mon[75183]: pgmap v593: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v594: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:32 compute-0 ceph-mon[75183]: pgmap v594: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:30:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v595: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:34 compute-0 ceph-mon[75183]: pgmap v595: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v596: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:36 compute-0 ceph-mon[75183]: pgmap v596: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v597: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:30:38 compute-0 ceph-mon[75183]: pgmap v597: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v598: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:40 compute-0 ceph-mon[75183]: pgmap v598: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v599: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:42 compute-0 ceph-mon[75183]: pgmap v599: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:42 compute-0 podman[237502]: 2026-01-29 09:30:42.154739824 +0000 UTC m=+0.092252964 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 09:30:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:30:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v600: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:44 compute-0 ceph-mon[75183]: pgmap v600: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:44 compute-0 sudo[237528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:30:44 compute-0 sudo[237528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:30:44 compute-0 sudo[237528]: pam_unix(sudo:session): session closed for user root
Jan 29 09:30:44 compute-0 sudo[237560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:30:44 compute-0 sudo[237560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:30:44 compute-0 podman[237552]: 2026-01-29 09:30:44.430122348 +0000 UTC m=+0.054060745 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 29 09:30:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v601: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:44 compute-0 sudo[237560]: pam_unix(sudo:session): session closed for user root
Jan 29 09:30:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:30:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:30:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:30:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:30:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:30:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:30:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:30:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:30:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:30:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:30:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:30:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:30:44 compute-0 sudo[237624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:30:44 compute-0 sudo[237624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:30:44 compute-0 sudo[237624]: pam_unix(sudo:session): session closed for user root
Jan 29 09:30:45 compute-0 sudo[237649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:30:45 compute-0 sudo[237649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:30:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:30:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:30:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:30:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:30:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:30:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:30:45 compute-0 podman[237688]: 2026-01-29 09:30:45.318989966 +0000 UTC m=+0.044934910 container create 3b11cd70c1ce723d3c2a145f913791692904e53f8cac8d665c9f2d7dc9100abe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_robinson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:30:45 compute-0 systemd[1]: Started libpod-conmon-3b11cd70c1ce723d3c2a145f913791692904e53f8cac8d665c9f2d7dc9100abe.scope.
Jan 29 09:30:45 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:30:45 compute-0 podman[237688]: 2026-01-29 09:30:45.297897598 +0000 UTC m=+0.023842542 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:30:45 compute-0 podman[237688]: 2026-01-29 09:30:45.399621646 +0000 UTC m=+0.125566590 container init 3b11cd70c1ce723d3c2a145f913791692904e53f8cac8d665c9f2d7dc9100abe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_robinson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 09:30:45 compute-0 podman[237688]: 2026-01-29 09:30:45.409043849 +0000 UTC m=+0.134988763 container start 3b11cd70c1ce723d3c2a145f913791692904e53f8cac8d665c9f2d7dc9100abe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 09:30:45 compute-0 eloquent_robinson[237704]: 167 167
Jan 29 09:30:45 compute-0 systemd[1]: libpod-3b11cd70c1ce723d3c2a145f913791692904e53f8cac8d665c9f2d7dc9100abe.scope: Deactivated successfully.
Jan 29 09:30:45 compute-0 podman[237688]: 2026-01-29 09:30:45.413424607 +0000 UTC m=+0.139369541 container attach 3b11cd70c1ce723d3c2a145f913791692904e53f8cac8d665c9f2d7dc9100abe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_robinson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:30:45 compute-0 podman[237688]: 2026-01-29 09:30:45.414310811 +0000 UTC m=+0.140255755 container died 3b11cd70c1ce723d3c2a145f913791692904e53f8cac8d665c9f2d7dc9100abe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_robinson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 09:30:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-94b0ff30cf950c9180140ed8f0b9ae1a1206a76995669bf1f47a5470b47a3110-merged.mount: Deactivated successfully.
Jan 29 09:30:45 compute-0 podman[237688]: 2026-01-29 09:30:45.458543891 +0000 UTC m=+0.184488805 container remove 3b11cd70c1ce723d3c2a145f913791692904e53f8cac8d665c9f2d7dc9100abe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_robinson, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:30:45 compute-0 systemd[1]: libpod-conmon-3b11cd70c1ce723d3c2a145f913791692904e53f8cac8d665c9f2d7dc9100abe.scope: Deactivated successfully.
Jan 29 09:30:45 compute-0 podman[237730]: 2026-01-29 09:30:45.595619159 +0000 UTC m=+0.043499811 container create 35c67612b9c627419625947839db7da82cbaf7b14d8af73f61533e6980b1c29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_agnesi, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:30:45 compute-0 systemd[1]: Started libpod-conmon-35c67612b9c627419625947839db7da82cbaf7b14d8af73f61533e6980b1c29d.scope.
Jan 29 09:30:45 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:30:45 compute-0 podman[237730]: 2026-01-29 09:30:45.573506854 +0000 UTC m=+0.021387506 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ab6f8f5c835df16009ee56b0179ab3026433876a70b8060398ef9aa73d6895/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ab6f8f5c835df16009ee56b0179ab3026433876a70b8060398ef9aa73d6895/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ab6f8f5c835df16009ee56b0179ab3026433876a70b8060398ef9aa73d6895/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ab6f8f5c835df16009ee56b0179ab3026433876a70b8060398ef9aa73d6895/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ab6f8f5c835df16009ee56b0179ab3026433876a70b8060398ef9aa73d6895/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:30:45 compute-0 podman[237730]: 2026-01-29 09:30:45.683395471 +0000 UTC m=+0.131276103 container init 35c67612b9c627419625947839db7da82cbaf7b14d8af73f61533e6980b1c29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 09:30:45 compute-0 podman[237730]: 2026-01-29 09:30:45.693423341 +0000 UTC m=+0.141303983 container start 35c67612b9c627419625947839db7da82cbaf7b14d8af73f61533e6980b1c29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 09:30:45 compute-0 podman[237730]: 2026-01-29 09:30:45.698896588 +0000 UTC m=+0.146777220 container attach 35c67612b9c627419625947839db7da82cbaf7b14d8af73f61533e6980b1c29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 09:30:46 compute-0 optimistic_agnesi[237747]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:30:46 compute-0 optimistic_agnesi[237747]: --> All data devices are unavailable
Jan 29 09:30:46 compute-0 systemd[1]: libpod-35c67612b9c627419625947839db7da82cbaf7b14d8af73f61533e6980b1c29d.scope: Deactivated successfully.
Jan 29 09:30:46 compute-0 conmon[237747]: conmon 35c67612b9c627419625 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35c67612b9c627419625947839db7da82cbaf7b14d8af73f61533e6980b1c29d.scope/container/memory.events
Jan 29 09:30:46 compute-0 podman[237730]: 2026-01-29 09:30:46.141524288 +0000 UTC m=+0.589404920 container died 35c67612b9c627419625947839db7da82cbaf7b14d8af73f61533e6980b1c29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_agnesi, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 09:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-97ab6f8f5c835df16009ee56b0179ab3026433876a70b8060398ef9aa73d6895-merged.mount: Deactivated successfully.
Jan 29 09:30:46 compute-0 podman[237730]: 2026-01-29 09:30:46.182273454 +0000 UTC m=+0.630154116 container remove 35c67612b9c627419625947839db7da82cbaf7b14d8af73f61533e6980b1c29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 09:30:46 compute-0 systemd[1]: libpod-conmon-35c67612b9c627419625947839db7da82cbaf7b14d8af73f61533e6980b1c29d.scope: Deactivated successfully.
Jan 29 09:30:46 compute-0 ceph-mon[75183]: pgmap v601: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:46 compute-0 sudo[237649]: pam_unix(sudo:session): session closed for user root
Jan 29 09:30:46 compute-0 sudo[237778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:30:46 compute-0 sudo[237778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:30:46 compute-0 sudo[237778]: pam_unix(sudo:session): session closed for user root
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.310 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.312 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:30:46 compute-0 sudo[237803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:30:46 compute-0 sudo[237803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.333 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.333 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.334 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.346 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.346 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.347 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.347 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.347 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.347 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.347 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.347 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.348 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.368 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.368 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.368 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.368 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.369 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:30:46 compute-0 podman[237861]: 2026-01-29 09:30:46.596042808 +0000 UTC m=+0.042024682 container create 95420d8fbbea617ac7cb345a01db868366fa4ae09c080301c3c9b59329420baf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_tesla, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:30:46 compute-0 systemd[1]: Started libpod-conmon-95420d8fbbea617ac7cb345a01db868366fa4ae09c080301c3c9b59329420baf.scope.
Jan 29 09:30:46 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:30:46 compute-0 podman[237861]: 2026-01-29 09:30:46.573841781 +0000 UTC m=+0.019823675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:30:46 compute-0 podman[237861]: 2026-01-29 09:30:46.679659158 +0000 UTC m=+0.125641052 container init 95420d8fbbea617ac7cb345a01db868366fa4ae09c080301c3c9b59329420baf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_tesla, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:30:46 compute-0 podman[237861]: 2026-01-29 09:30:46.686300577 +0000 UTC m=+0.132282451 container start 95420d8fbbea617ac7cb345a01db868366fa4ae09c080301c3c9b59329420baf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:30:46 compute-0 podman[237861]: 2026-01-29 09:30:46.691012823 +0000 UTC m=+0.136994727 container attach 95420d8fbbea617ac7cb345a01db868366fa4ae09c080301c3c9b59329420baf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_tesla, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:30:46 compute-0 eager_tesla[237877]: 167 167
Jan 29 09:30:46 compute-0 systemd[1]: libpod-95420d8fbbea617ac7cb345a01db868366fa4ae09c080301c3c9b59329420baf.scope: Deactivated successfully.
Jan 29 09:30:46 compute-0 podman[237861]: 2026-01-29 09:30:46.692921315 +0000 UTC m=+0.138903189 container died 95420d8fbbea617ac7cb345a01db868366fa4ae09c080301c3c9b59329420baf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_tesla, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 29 09:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8cdbf3841f65308d4128a1accbd0e9fb4fc1195b343ad3487877afbf6a6e6a3-merged.mount: Deactivated successfully.
Jan 29 09:30:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v602: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:46 compute-0 podman[237861]: 2026-01-29 09:30:46.76074233 +0000 UTC m=+0.206724204 container remove 95420d8fbbea617ac7cb345a01db868366fa4ae09c080301c3c9b59329420baf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_tesla, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 09:30:46 compute-0 systemd[1]: libpod-conmon-95420d8fbbea617ac7cb345a01db868366fa4ae09c080301c3c9b59329420baf.scope: Deactivated successfully.
Jan 29 09:30:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:30:46 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2808549410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:30:46 compute-0 podman[237899]: 2026-01-29 09:30:46.875918049 +0000 UTC m=+0.039935746 container create 13da33c82ad144db2869fb05bcc2bcf695d8dd44589c8178b10dd94460c97637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 09:30:46 compute-0 nova_compute[236255]: 2026-01-29 09:30:46.889 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:30:46 compute-0 systemd[1]: Started libpod-conmon-13da33c82ad144db2869fb05bcc2bcf695d8dd44589c8178b10dd94460c97637.scope.
Jan 29 09:30:46 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7068c39b671ec3f73cc053c1c68db4c3f4336ca45cf44920d055fa17b83216/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7068c39b671ec3f73cc053c1c68db4c3f4336ca45cf44920d055fa17b83216/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7068c39b671ec3f73cc053c1c68db4c3f4336ca45cf44920d055fa17b83216/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7068c39b671ec3f73cc053c1c68db4c3f4336ca45cf44920d055fa17b83216/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:30:46 compute-0 podman[237899]: 2026-01-29 09:30:46.856774324 +0000 UTC m=+0.020792051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:30:46 compute-0 podman[237899]: 2026-01-29 09:30:46.968729196 +0000 UTC m=+0.132746913 container init 13da33c82ad144db2869fb05bcc2bcf695d8dd44589c8178b10dd94460c97637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:30:46 compute-0 podman[237899]: 2026-01-29 09:30:46.973988638 +0000 UTC m=+0.138006345 container start 13da33c82ad144db2869fb05bcc2bcf695d8dd44589c8178b10dd94460c97637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swanson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:30:46 compute-0 podman[237899]: 2026-01-29 09:30:46.97742945 +0000 UTC m=+0.141447147 container attach 13da33c82ad144db2869fb05bcc2bcf695d8dd44589c8178b10dd94460c97637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swanson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 09:30:47 compute-0 nova_compute[236255]: 2026-01-29 09:30:47.038 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:30:47 compute-0 nova_compute[236255]: 2026-01-29 09:30:47.038 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5243MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:30:47 compute-0 nova_compute[236255]: 2026-01-29 09:30:47.039 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:30:47 compute-0 nova_compute[236255]: 2026-01-29 09:30:47.039 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:30:47 compute-0 nova_compute[236255]: 2026-01-29 09:30:47.130 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:30:47 compute-0 nova_compute[236255]: 2026-01-29 09:30:47.130 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:30:47 compute-0 nova_compute[236255]: 2026-01-29 09:30:47.144 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:30:47 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2808549410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]: {
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:     "0": [
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:         {
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "devices": [
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "/dev/loop3"
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             ],
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_name": "ceph_lv0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_size": "21470642176",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "name": "ceph_lv0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "tags": {
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.cluster_name": "ceph",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.crush_device_class": "",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.encrypted": "0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.objectstore": "bluestore",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.osd_id": "0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.type": "block",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.vdo": "0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.with_tpm": "0"
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             },
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "type": "block",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "vg_name": "ceph_vg0"
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:         }
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:     ],
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:     "1": [
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:         {
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "devices": [
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "/dev/loop4"
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             ],
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_name": "ceph_lv1",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_size": "21470642176",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "name": "ceph_lv1",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "tags": {
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.cluster_name": "ceph",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.crush_device_class": "",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.encrypted": "0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.objectstore": "bluestore",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.osd_id": "1",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.type": "block",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.vdo": "0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.with_tpm": "0"
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             },
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "type": "block",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "vg_name": "ceph_vg1"
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:         }
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:     ],
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:     "2": [
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:         {
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "devices": [
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "/dev/loop5"
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             ],
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_name": "ceph_lv2",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_size": "21470642176",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "name": "ceph_lv2",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "tags": {
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.cluster_name": "ceph",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.crush_device_class": "",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.encrypted": "0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.objectstore": "bluestore",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.osd_id": "2",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.type": "block",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.vdo": "0",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:                 "ceph.with_tpm": "0"
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             },
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "type": "block",
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:             "vg_name": "ceph_vg2"
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:         }
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]:     ]
Jan 29 09:30:47 compute-0 vigorous_swanson[237917]: }
Jan 29 09:30:47 compute-0 systemd[1]: libpod-13da33c82ad144db2869fb05bcc2bcf695d8dd44589c8178b10dd94460c97637.scope: Deactivated successfully.
Jan 29 09:30:47 compute-0 podman[237947]: 2026-01-29 09:30:47.293098394 +0000 UTC m=+0.021128479 container died 13da33c82ad144db2869fb05bcc2bcf695d8dd44589c8178b10dd94460c97637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 09:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b7068c39b671ec3f73cc053c1c68db4c3f4336ca45cf44920d055fa17b83216-merged.mount: Deactivated successfully.
Jan 29 09:30:47 compute-0 podman[237947]: 2026-01-29 09:30:47.334011695 +0000 UTC m=+0.062041770 container remove 13da33c82ad144db2869fb05bcc2bcf695d8dd44589c8178b10dd94460c97637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swanson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:30:47 compute-0 systemd[1]: libpod-conmon-13da33c82ad144db2869fb05bcc2bcf695d8dd44589c8178b10dd94460c97637.scope: Deactivated successfully.
Jan 29 09:30:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:30:47 compute-0 sudo[237803]: pam_unix(sudo:session): session closed for user root
Jan 29 09:30:47 compute-0 sudo[237962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:30:47 compute-0 sudo[237962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:30:47 compute-0 sudo[237962]: pam_unix(sudo:session): session closed for user root
Jan 29 09:30:47 compute-0 sudo[237987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:30:47 compute-0 sudo[237987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:30:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:30:47 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2878083451' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:30:47 compute-0 nova_compute[236255]: 2026-01-29 09:30:47.646 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:30:47 compute-0 nova_compute[236255]: 2026-01-29 09:30:47.654 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:30:47 compute-0 nova_compute[236255]: 2026-01-29 09:30:47.675 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:30:47 compute-0 nova_compute[236255]: 2026-01-29 09:30:47.678 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:30:47 compute-0 nova_compute[236255]: 2026-01-29 09:30:47.679 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:30:47 compute-0 podman[238026]: 2026-01-29 09:30:47.739488575 +0000 UTC m=+0.041477537 container create 409357cd88d82c4bb6406473ca9fa6b5d13e9f0536ba49c2b1d837c5ccedad58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:30:47 compute-0 systemd[1]: Started libpod-conmon-409357cd88d82c4bb6406473ca9fa6b5d13e9f0536ba49c2b1d837c5ccedad58.scope.
Jan 29 09:30:47 compute-0 podman[238026]: 2026-01-29 09:30:47.716423784 +0000 UTC m=+0.018412776 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:30:47 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:30:47 compute-0 podman[238026]: 2026-01-29 09:30:47.840197284 +0000 UTC m=+0.142186256 container init 409357cd88d82c4bb6406473ca9fa6b5d13e9f0536ba49c2b1d837c5ccedad58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_edison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:30:47 compute-0 podman[238026]: 2026-01-29 09:30:47.84487643 +0000 UTC m=+0.146865392 container start 409357cd88d82c4bb6406473ca9fa6b5d13e9f0536ba49c2b1d837c5ccedad58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_edison, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 09:30:47 compute-0 interesting_edison[238043]: 167 167
Jan 29 09:30:47 compute-0 systemd[1]: libpod-409357cd88d82c4bb6406473ca9fa6b5d13e9f0536ba49c2b1d837c5ccedad58.scope: Deactivated successfully.
Jan 29 09:30:47 compute-0 podman[238026]: 2026-01-29 09:30:47.85155291 +0000 UTC m=+0.153541892 container attach 409357cd88d82c4bb6406473ca9fa6b5d13e9f0536ba49c2b1d837c5ccedad58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_edison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 09:30:47 compute-0 podman[238026]: 2026-01-29 09:30:47.851876819 +0000 UTC m=+0.153865811 container died 409357cd88d82c4bb6406473ca9fa6b5d13e9f0536ba49c2b1d837c5ccedad58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_edison, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fbf39466630b8a336c5f745a81a3166ff86e4cebad21f0d2de666c057f8877b-merged.mount: Deactivated successfully.
Jan 29 09:30:47 compute-0 podman[238026]: 2026-01-29 09:30:47.895330148 +0000 UTC m=+0.197319110 container remove 409357cd88d82c4bb6406473ca9fa6b5d13e9f0536ba49c2b1d837c5ccedad58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:30:47 compute-0 systemd[1]: libpod-conmon-409357cd88d82c4bb6406473ca9fa6b5d13e9f0536ba49c2b1d837c5ccedad58.scope: Deactivated successfully.
Jan 29 09:30:48 compute-0 podman[238066]: 2026-01-29 09:30:48.016865818 +0000 UTC m=+0.040300145 container create bac5814a64a9ca66a9284e13b8f08659c882fccdd79c0adb2551db3871c52328 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_taussig, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:30:48 compute-0 systemd[1]: Started libpod-conmon-bac5814a64a9ca66a9284e13b8f08659c882fccdd79c0adb2551db3871c52328.scope.
Jan 29 09:30:48 compute-0 podman[238066]: 2026-01-29 09:30:47.995332579 +0000 UTC m=+0.018766876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:30:48 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed890439c9597d898a83d061cf802c9ead51349ad205b8433417b18a7065e4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed890439c9597d898a83d061cf802c9ead51349ad205b8433417b18a7065e4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed890439c9597d898a83d061cf802c9ead51349ad205b8433417b18a7065e4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed890439c9597d898a83d061cf802c9ead51349ad205b8433417b18a7065e4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:30:48 compute-0 podman[238066]: 2026-01-29 09:30:48.111773322 +0000 UTC m=+0.135207619 container init bac5814a64a9ca66a9284e13b8f08659c882fccdd79c0adb2551db3871c52328 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_taussig, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 09:30:48 compute-0 podman[238066]: 2026-01-29 09:30:48.119716325 +0000 UTC m=+0.143150602 container start bac5814a64a9ca66a9284e13b8f08659c882fccdd79c0adb2551db3871c52328 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 09:30:48 compute-0 podman[238066]: 2026-01-29 09:30:48.123867697 +0000 UTC m=+0.147302214 container attach bac5814a64a9ca66a9284e13b8f08659c882fccdd79c0adb2551db3871c52328 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:30:48 compute-0 ceph-mon[75183]: pgmap v602: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:48 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2878083451' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:30:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v603: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:48 compute-0 lvm[238161]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:30:48 compute-0 lvm[238161]: VG ceph_vg1 finished
Jan 29 09:30:48 compute-0 lvm[238160]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:30:48 compute-0 lvm[238160]: VG ceph_vg0 finished
Jan 29 09:30:48 compute-0 lvm[238163]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:30:48 compute-0 lvm[238163]: VG ceph_vg2 finished
Jan 29 09:30:48 compute-0 wizardly_taussig[238082]: {}
Jan 29 09:30:48 compute-0 systemd[1]: libpod-bac5814a64a9ca66a9284e13b8f08659c882fccdd79c0adb2551db3871c52328.scope: Deactivated successfully.
Jan 29 09:30:48 compute-0 podman[238066]: 2026-01-29 09:30:48.944650573 +0000 UTC m=+0.968084850 container died bac5814a64a9ca66a9284e13b8f08659c882fccdd79c0adb2551db3871c52328 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_taussig, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 09:30:48 compute-0 systemd[1]: libpod-bac5814a64a9ca66a9284e13b8f08659c882fccdd79c0adb2551db3871c52328.scope: Consumed 1.244s CPU time.
Jan 29 09:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ed890439c9597d898a83d061cf802c9ead51349ad205b8433417b18a7065e4e-merged.mount: Deactivated successfully.
Jan 29 09:30:48 compute-0 podman[238066]: 2026-01-29 09:30:48.990065305 +0000 UTC m=+1.013499612 container remove bac5814a64a9ca66a9284e13b8f08659c882fccdd79c0adb2551db3871c52328 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_taussig, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:30:49 compute-0 systemd[1]: libpod-conmon-bac5814a64a9ca66a9284e13b8f08659c882fccdd79c0adb2551db3871c52328.scope: Deactivated successfully.
Jan 29 09:30:49 compute-0 sudo[237987]: pam_unix(sudo:session): session closed for user root
Jan 29 09:30:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:30:49 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:30:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:30:49 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:30:49 compute-0 sudo[238178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:30:49 compute-0 sudo[238178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:30:49 compute-0 sudo[238178]: pam_unix(sudo:session): session closed for user root
Jan 29 09:30:50 compute-0 ceph-mon[75183]: pgmap v603: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:50 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:30:50 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:30:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v604: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:30:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1085622553' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:30:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:30:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1085622553' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:30:52 compute-0 ceph-mon[75183]: pgmap v604: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/1085622553' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:30:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/1085622553' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:30:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:30:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v605: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:54 compute-0 ceph-mon[75183]: pgmap v605: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v606: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:30:55
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.data', 'images', '.mgr', 'vms', 'cephfs.cephfs.meta']
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:30:56 compute-0 ceph-mon[75183]: pgmap v606: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:30:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v607: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:30:58 compute-0 ceph-mon[75183]: pgmap v607: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:30:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v608: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:00 compute-0 ceph-mon[75183]: pgmap v608: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v609: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:31:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:31:02 compute-0 ceph-mon[75183]: pgmap v609: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:31:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v610: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:04 compute-0 ceph-mon[75183]: pgmap v610: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v611: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:04 compute-0 sshd-session[238203]: Connection closed by 43.166.3.199 port 36470
Jan 29 09:31:06 compute-0 ceph-mon[75183]: pgmap v611: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v612: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:31:08 compute-0 ceph-mon[75183]: pgmap v612: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v613: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:31:09.032 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:31:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:31:09.034 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:31:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:31:09.034 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:31:10 compute-0 ceph-mon[75183]: pgmap v613: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v614: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:11 compute-0 ceph-mon[75183]: pgmap v614: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:31:11.441 152476 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:86:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:f9:50:a2:e1:9f'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 29 09:31:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:31:11.442 152476 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 29 09:31:11 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:31:11.442 152476 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=347a774e-f56f-46e9-8fb5-240ce07d1693, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 09:31:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:31:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v615: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:13 compute-0 podman[238204]: 2026-01-29 09:31:13.194004011 +0000 UTC m=+0.122504758 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 09:31:13 compute-0 ceph-mon[75183]: pgmap v615: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v616: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:15 compute-0 podman[238233]: 2026-01-29 09:31:15.125698627 +0000 UTC m=+0.061462635 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 29 09:31:15 compute-0 ceph-mon[75183]: pgmap v616: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v617: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:31:17 compute-0 ceph-mon[75183]: pgmap v617: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v618: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:19 compute-0 ceph-mon[75183]: pgmap v618: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v619: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:21 compute-0 ceph-mon[75183]: pgmap v619: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:31:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v620: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:23 compute-0 ceph-mon[75183]: pgmap v620: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v621: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:25 compute-0 ceph-mon[75183]: pgmap v621: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:31:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:31:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:31:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:31:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:31:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:31:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v622: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:31:27 compute-0 ceph-mon[75183]: pgmap v622: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v623: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:29 compute-0 ceph-mon[75183]: pgmap v623: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v624: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:31 compute-0 ceph-mon[75183]: pgmap v624: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:31:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v625: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:33 compute-0 ceph-mon[75183]: pgmap v625: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v626: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:35 compute-0 ceph-mon[75183]: pgmap v626: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v627: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:31:37 compute-0 ceph-mon[75183]: pgmap v627: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:38 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:31:38 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3065 writes, 13K keys, 3065 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s
                                           Cumulative WAL: 3065 writes, 3065 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1284 writes, 5587 keys, 1284 commit groups, 1.0 writes per commit group, ingest: 5.71 MB, 0.01 MB/s
                                           Interval WAL: 1284 writes, 1284 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     96.0      0.10              0.03         6    0.017       0      0       0.0       0.0
                                             L6      1/0    4.56 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4    171.1    139.4      0.17              0.06         5    0.034     16K   2274       0.0       0.0
                                            Sum      1/0    4.56 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4    106.5    123.0      0.28              0.08        11    0.025     16K   2274       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4    120.6    123.9      0.15              0.04         6    0.026     10K   1496       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    171.1    139.4      0.17              0.06         5    0.034     16K   2274       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     99.9      0.10              0.03         5    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.010, interval 0.004
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.03 MB/s write, 0.03 GB read, 0.02 MB/s read, 0.3 seconds
                                           Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55621d63f8d0#2 capacity: 308.00 MB usage: 1.46 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(97,1.31 MB,0.423957%) FilterBlock(12,54.30 KB,0.0172157%) IndexBlock(12,100.52 KB,0.0318701%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 29 09:31:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v628: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:39 compute-0 ceph-mon[75183]: pgmap v628: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v629: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:41 compute-0 ceph-mon[75183]: pgmap v629: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:31:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v630: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:44 compute-0 ceph-mon[75183]: pgmap v630: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:44 compute-0 podman[238252]: 2026-01-29 09:31:44.170324667 +0000 UTC m=+0.108744117 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 09:31:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v631: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:46 compute-0 ceph-mon[75183]: pgmap v631: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:46 compute-0 podman[238278]: 2026-01-29 09:31:46.149930184 +0000 UTC m=+0.087360711 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 29 09:31:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v632: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.681 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.681 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.682 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.682 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.802 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.802 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.802 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.803 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.803 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.803 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.803 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.804 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.804 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.830 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.830 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.830 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.830 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:31:47 compute-0 nova_compute[236255]: 2026-01-29 09:31:47.831 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:31:48 compute-0 ceph-mon[75183]: pgmap v632: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:31:48 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/331994315' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:31:48 compute-0 nova_compute[236255]: 2026-01-29 09:31:48.305 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:31:48 compute-0 nova_compute[236255]: 2026-01-29 09:31:48.440 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:31:48 compute-0 nova_compute[236255]: 2026-01-29 09:31:48.441 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5304MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:31:48 compute-0 nova_compute[236255]: 2026-01-29 09:31:48.442 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:31:48 compute-0 nova_compute[236255]: 2026-01-29 09:31:48.442 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:31:48 compute-0 nova_compute[236255]: 2026-01-29 09:31:48.589 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:31:48 compute-0 nova_compute[236255]: 2026-01-29 09:31:48.590 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:31:48 compute-0 nova_compute[236255]: 2026-01-29 09:31:48.610 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:31:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v633: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:49 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/331994315' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:31:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:31:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196198332' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:31:49 compute-0 sudo[238339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:31:49 compute-0 sudo[238339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:31:49 compute-0 sudo[238339]: pam_unix(sudo:session): session closed for user root
Jan 29 09:31:49 compute-0 nova_compute[236255]: 2026-01-29 09:31:49.207 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:31:49 compute-0 nova_compute[236255]: 2026-01-29 09:31:49.211 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:31:49 compute-0 sudo[238366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:31:49 compute-0 sudo[238366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:31:49 compute-0 nova_compute[236255]: 2026-01-29 09:31:49.461 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:31:49 compute-0 nova_compute[236255]: 2026-01-29 09:31:49.462 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:31:49 compute-0 nova_compute[236255]: 2026-01-29 09:31:49.463 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:31:49 compute-0 sudo[238366]: pam_unix(sudo:session): session closed for user root
Jan 29 09:31:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:31:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:31:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:31:49 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:31:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:31:49 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:31:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:31:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:31:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:31:49 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:31:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:31:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:31:49 compute-0 sudo[238423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:31:49 compute-0 sudo[238423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:31:49 compute-0 sudo[238423]: pam_unix(sudo:session): session closed for user root
Jan 29 09:31:49 compute-0 sudo[238448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:31:49 compute-0 sudo[238448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:31:50 compute-0 ceph-mon[75183]: pgmap v633: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:50 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3196198332' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:31:50 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:31:50 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:31:50 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:31:50 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:31:50 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:31:50 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:31:50 compute-0 podman[238485]: 2026-01-29 09:31:50.083833168 +0000 UTC m=+0.038978030 container create b81017295e6391f257543e63e801c531918993920753739c8806f3b5bd2aa4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elbakyan, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 09:31:50 compute-0 systemd[1]: Started libpod-conmon-b81017295e6391f257543e63e801c531918993920753739c8806f3b5bd2aa4ff.scope.
Jan 29 09:31:50 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:31:50 compute-0 podman[238485]: 2026-01-29 09:31:50.063675596 +0000 UTC m=+0.018820458 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:31:50 compute-0 podman[238485]: 2026-01-29 09:31:50.16638923 +0000 UTC m=+0.121534142 container init b81017295e6391f257543e63e801c531918993920753739c8806f3b5bd2aa4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elbakyan, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:31:50 compute-0 podman[238485]: 2026-01-29 09:31:50.175785942 +0000 UTC m=+0.130930774 container start b81017295e6391f257543e63e801c531918993920753739c8806f3b5bd2aa4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elbakyan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:31:50 compute-0 podman[238485]: 2026-01-29 09:31:50.180465788 +0000 UTC m=+0.135610670 container attach b81017295e6391f257543e63e801c531918993920753739c8806f3b5bd2aa4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 09:31:50 compute-0 recursing_elbakyan[238500]: 167 167
Jan 29 09:31:50 compute-0 systemd[1]: libpod-b81017295e6391f257543e63e801c531918993920753739c8806f3b5bd2aa4ff.scope: Deactivated successfully.
Jan 29 09:31:50 compute-0 podman[238485]: 2026-01-29 09:31:50.182456822 +0000 UTC m=+0.137601694 container died b81017295e6391f257543e63e801c531918993920753739c8806f3b5bd2aa4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elbakyan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 09:31:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c242a7ef04a602d8e30888f91d58975eac18cedfe54206aead038958e892d985-merged.mount: Deactivated successfully.
Jan 29 09:31:50 compute-0 podman[238485]: 2026-01-29 09:31:50.222670724 +0000 UTC m=+0.177815586 container remove b81017295e6391f257543e63e801c531918993920753739c8806f3b5bd2aa4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 09:31:50 compute-0 systemd[1]: libpod-conmon-b81017295e6391f257543e63e801c531918993920753739c8806f3b5bd2aa4ff.scope: Deactivated successfully.
Jan 29 09:31:50 compute-0 podman[238526]: 2026-01-29 09:31:50.41136332 +0000 UTC m=+0.056891651 container create 75ffe3076f252c28d8501ec83881144f850ccb14e3a9c35ed7e6481c90e0a66d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_franklin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:31:50 compute-0 systemd[1]: Started libpod-conmon-75ffe3076f252c28d8501ec83881144f850ccb14e3a9c35ed7e6481c90e0a66d.scope.
Jan 29 09:31:50 compute-0 podman[238526]: 2026-01-29 09:31:50.389529363 +0000 UTC m=+0.035057704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:31:50 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35d4af4a392e1cfb3dac182d0bcffaca38703c42498025b92c4a7ceef14f9146/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35d4af4a392e1cfb3dac182d0bcffaca38703c42498025b92c4a7ceef14f9146/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35d4af4a392e1cfb3dac182d0bcffaca38703c42498025b92c4a7ceef14f9146/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35d4af4a392e1cfb3dac182d0bcffaca38703c42498025b92c4a7ceef14f9146/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35d4af4a392e1cfb3dac182d0bcffaca38703c42498025b92c4a7ceef14f9146/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:31:50 compute-0 podman[238526]: 2026-01-29 09:31:50.516285794 +0000 UTC m=+0.161814185 container init 75ffe3076f252c28d8501ec83881144f850ccb14e3a9c35ed7e6481c90e0a66d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_franklin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:31:50 compute-0 podman[238526]: 2026-01-29 09:31:50.531701298 +0000 UTC m=+0.177229629 container start 75ffe3076f252c28d8501ec83881144f850ccb14e3a9c35ed7e6481c90e0a66d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_franklin, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 09:31:50 compute-0 podman[238526]: 2026-01-29 09:31:50.536172209 +0000 UTC m=+0.181700550 container attach 75ffe3076f252c28d8501ec83881144f850ccb14e3a9c35ed7e6481c90e0a66d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:31:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v634: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:51 compute-0 beautiful_franklin[238544]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:31:51 compute-0 beautiful_franklin[238544]: --> All data devices are unavailable
Jan 29 09:31:51 compute-0 systemd[1]: libpod-75ffe3076f252c28d8501ec83881144f850ccb14e3a9c35ed7e6481c90e0a66d.scope: Deactivated successfully.
Jan 29 09:31:51 compute-0 conmon[238544]: conmon 75ffe3076f252c28d850 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75ffe3076f252c28d8501ec83881144f850ccb14e3a9c35ed7e6481c90e0a66d.scope/container/memory.events
Jan 29 09:31:51 compute-0 podman[238564]: 2026-01-29 09:31:51.103462433 +0000 UTC m=+0.030662006 container died 75ffe3076f252c28d8501ec83881144f850ccb14e3a9c35ed7e6481c90e0a66d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 09:31:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-35d4af4a392e1cfb3dac182d0bcffaca38703c42498025b92c4a7ceef14f9146-merged.mount: Deactivated successfully.
Jan 29 09:31:51 compute-0 podman[238564]: 2026-01-29 09:31:51.141112827 +0000 UTC m=+0.068312360 container remove 75ffe3076f252c28d8501ec83881144f850ccb14e3a9c35ed7e6481c90e0a66d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_franklin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:31:51 compute-0 systemd[1]: libpod-conmon-75ffe3076f252c28d8501ec83881144f850ccb14e3a9c35ed7e6481c90e0a66d.scope: Deactivated successfully.
Jan 29 09:31:51 compute-0 sudo[238448]: pam_unix(sudo:session): session closed for user root
Jan 29 09:31:51 compute-0 sudo[238579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:31:51 compute-0 sudo[238579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:31:51 compute-0 sudo[238579]: pam_unix(sudo:session): session closed for user root
Jan 29 09:31:51 compute-0 sudo[238604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:31:51 compute-0 sudo[238604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:31:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:31:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3100930825' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:31:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:31:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3100930825' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:31:51 compute-0 podman[238642]: 2026-01-29 09:31:51.546574467 +0000 UTC m=+0.039469313 container create 7bb14fe7cb7baab7f48b4fc704f1018e16066963c003864dc4f355d03b0eac37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 09:31:51 compute-0 systemd[1]: Started libpod-conmon-7bb14fe7cb7baab7f48b4fc704f1018e16066963c003864dc4f355d03b0eac37.scope.
Jan 29 09:31:51 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:31:51 compute-0 podman[238642]: 2026-01-29 09:31:51.610491557 +0000 UTC m=+0.103386413 container init 7bb14fe7cb7baab7f48b4fc704f1018e16066963c003864dc4f355d03b0eac37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_raman, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:31:51 compute-0 podman[238642]: 2026-01-29 09:31:51.615880092 +0000 UTC m=+0.108774968 container start 7bb14fe7cb7baab7f48b4fc704f1018e16066963c003864dc4f355d03b0eac37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 09:31:51 compute-0 podman[238642]: 2026-01-29 09:31:51.621279807 +0000 UTC m=+0.114174663 container attach 7bb14fe7cb7baab7f48b4fc704f1018e16066963c003864dc4f355d03b0eac37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_raman, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:31:51 compute-0 eloquent_raman[238659]: 167 167
Jan 29 09:31:51 compute-0 systemd[1]: libpod-7bb14fe7cb7baab7f48b4fc704f1018e16066963c003864dc4f355d03b0eac37.scope: Deactivated successfully.
Jan 29 09:31:51 compute-0 podman[238642]: 2026-01-29 09:31:51.62249811 +0000 UTC m=+0.115392956 container died 7bb14fe7cb7baab7f48b4fc704f1018e16066963c003864dc4f355d03b0eac37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_raman, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 09:31:51 compute-0 podman[238642]: 2026-01-29 09:31:51.531873561 +0000 UTC m=+0.024768427 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:31:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-21326a75e90c68701507693a88acab3eaf926bf5dea70ce9968782cd262a5038-merged.mount: Deactivated successfully.
Jan 29 09:31:51 compute-0 podman[238642]: 2026-01-29 09:31:51.655096117 +0000 UTC m=+0.147990973 container remove 7bb14fe7cb7baab7f48b4fc704f1018e16066963c003864dc4f355d03b0eac37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_raman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:31:51 compute-0 systemd[1]: libpod-conmon-7bb14fe7cb7baab7f48b4fc704f1018e16066963c003864dc4f355d03b0eac37.scope: Deactivated successfully.
Jan 29 09:31:51 compute-0 podman[238684]: 2026-01-29 09:31:51.784188921 +0000 UTC m=+0.046838122 container create 2914571f5e94c46693159b569f52ae943854bf7c1c9c5e7ac761724ea4631d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 09:31:51 compute-0 systemd[1]: Started libpod-conmon-2914571f5e94c46693159b569f52ae943854bf7c1c9c5e7ac761724ea4631d9b.scope.
Jan 29 09:31:51 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcdd39d2dacb87e13b309ab25284b4cba3a75af8dcbc32667db433446e26bb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcdd39d2dacb87e13b309ab25284b4cba3a75af8dcbc32667db433446e26bb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcdd39d2dacb87e13b309ab25284b4cba3a75af8dcbc32667db433446e26bb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcdd39d2dacb87e13b309ab25284b4cba3a75af8dcbc32667db433446e26bb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:31:51 compute-0 podman[238684]: 2026-01-29 09:31:51.759701982 +0000 UTC m=+0.022351223 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:31:51 compute-0 podman[238684]: 2026-01-29 09:31:51.861900962 +0000 UTC m=+0.124550143 container init 2914571f5e94c46693159b569f52ae943854bf7c1c9c5e7ac761724ea4631d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:31:51 compute-0 podman[238684]: 2026-01-29 09:31:51.869396093 +0000 UTC m=+0.132045254 container start 2914571f5e94c46693159b569f52ae943854bf7c1c9c5e7ac761724ea4631d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:31:51 compute-0 podman[238684]: 2026-01-29 09:31:51.872330962 +0000 UTC m=+0.134980203 container attach 2914571f5e94c46693159b569f52ae943854bf7c1c9c5e7ac761724ea4631d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:31:52 compute-0 ceph-mon[75183]: pgmap v634: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3100930825' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:31:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3100930825' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]: {
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:     "0": [
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:         {
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "devices": [
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "/dev/loop3"
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             ],
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_name": "ceph_lv0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_size": "21470642176",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "name": "ceph_lv0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "tags": {
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.cluster_name": "ceph",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.crush_device_class": "",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.encrypted": "0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.objectstore": "bluestore",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.osd_id": "0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.type": "block",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.vdo": "0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.with_tpm": "0"
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             },
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "type": "block",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "vg_name": "ceph_vg0"
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:         }
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:     ],
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:     "1": [
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:         {
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "devices": [
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "/dev/loop4"
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             ],
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_name": "ceph_lv1",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_size": "21470642176",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "name": "ceph_lv1",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "tags": {
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.cluster_name": "ceph",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.crush_device_class": "",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.encrypted": "0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.objectstore": "bluestore",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.osd_id": "1",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.type": "block",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.vdo": "0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.with_tpm": "0"
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             },
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "type": "block",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "vg_name": "ceph_vg1"
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:         }
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:     ],
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:     "2": [
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:         {
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "devices": [
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "/dev/loop5"
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             ],
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_name": "ceph_lv2",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_size": "21470642176",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "name": "ceph_lv2",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "tags": {
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.cluster_name": "ceph",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.crush_device_class": "",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.encrypted": "0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.objectstore": "bluestore",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.osd_id": "2",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.type": "block",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.vdo": "0",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:                 "ceph.with_tpm": "0"
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             },
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "type": "block",
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:             "vg_name": "ceph_vg2"
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:         }
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]:     ]
Jan 29 09:31:52 compute-0 peaceful_leavitt[238701]: }
Jan 29 09:31:52 compute-0 systemd[1]: libpod-2914571f5e94c46693159b569f52ae943854bf7c1c9c5e7ac761724ea4631d9b.scope: Deactivated successfully.
Jan 29 09:31:52 compute-0 podman[238684]: 2026-01-29 09:31:52.153718624 +0000 UTC m=+0.416367785 container died 2914571f5e94c46693159b569f52ae943854bf7c1c9c5e7ac761724ea4631d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_leavitt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfcdd39d2dacb87e13b309ab25284b4cba3a75af8dcbc32667db433446e26bb2-merged.mount: Deactivated successfully.
Jan 29 09:31:52 compute-0 podman[238684]: 2026-01-29 09:31:52.201268723 +0000 UTC m=+0.463917884 container remove 2914571f5e94c46693159b569f52ae943854bf7c1c9c5e7ac761724ea4631d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:31:52 compute-0 systemd[1]: libpod-conmon-2914571f5e94c46693159b569f52ae943854bf7c1c9c5e7ac761724ea4631d9b.scope: Deactivated successfully.
Jan 29 09:31:52 compute-0 sudo[238604]: pam_unix(sudo:session): session closed for user root
Jan 29 09:31:52 compute-0 sudo[238722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:31:52 compute-0 sudo[238722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:31:52 compute-0 sudo[238722]: pam_unix(sudo:session): session closed for user root
Jan 29 09:31:52 compute-0 sudo[238747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:31:52 compute-0 sudo[238747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:31:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:31:52 compute-0 podman[238786]: 2026-01-29 09:31:52.63185445 +0000 UTC m=+0.040094600 container create 71992b60bd642f07f712721346b88039f2160f11537642645614dd52ddedcdad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:31:52 compute-0 systemd[1]: Started libpod-conmon-71992b60bd642f07f712721346b88039f2160f11537642645614dd52ddedcdad.scope.
Jan 29 09:31:52 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:31:52 compute-0 podman[238786]: 2026-01-29 09:31:52.707916556 +0000 UTC m=+0.116156766 container init 71992b60bd642f07f712721346b88039f2160f11537642645614dd52ddedcdad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_raman, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 09:31:52 compute-0 podman[238786]: 2026-01-29 09:31:52.613381253 +0000 UTC m=+0.021621393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:31:52 compute-0 podman[238786]: 2026-01-29 09:31:52.713913638 +0000 UTC m=+0.122153798 container start 71992b60bd642f07f712721346b88039f2160f11537642645614dd52ddedcdad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_raman, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:31:52 compute-0 podman[238786]: 2026-01-29 09:31:52.717509534 +0000 UTC m=+0.125749694 container attach 71992b60bd642f07f712721346b88039f2160f11537642645614dd52ddedcdad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_raman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:31:52 compute-0 compassionate_raman[238802]: 167 167
Jan 29 09:31:52 compute-0 systemd[1]: libpod-71992b60bd642f07f712721346b88039f2160f11537642645614dd52ddedcdad.scope: Deactivated successfully.
Jan 29 09:31:52 compute-0 podman[238786]: 2026-01-29 09:31:52.719636482 +0000 UTC m=+0.127876612 container died 71992b60bd642f07f712721346b88039f2160f11537642645614dd52ddedcdad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_raman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e8d68ddd3e0e1761d29632d6a124cf0cd4a5ee1a22458d85e4c37fa2e60711e-merged.mount: Deactivated successfully.
Jan 29 09:31:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v635: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:52 compute-0 podman[238786]: 2026-01-29 09:31:52.761431376 +0000 UTC m=+0.169671536 container remove 71992b60bd642f07f712721346b88039f2160f11537642645614dd52ddedcdad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_raman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:31:52 compute-0 systemd[1]: libpod-conmon-71992b60bd642f07f712721346b88039f2160f11537642645614dd52ddedcdad.scope: Deactivated successfully.
Jan 29 09:31:52 compute-0 podman[238827]: 2026-01-29 09:31:52.925262415 +0000 UTC m=+0.036218976 container create 90637b8b1ba0b4de7b0d3e5b134ea1a06122fcfa5bfdfe9c053583983d91222e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:31:52 compute-0 systemd[1]: Started libpod-conmon-90637b8b1ba0b4de7b0d3e5b134ea1a06122fcfa5bfdfe9c053583983d91222e.scope.
Jan 29 09:31:52 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba3e5d9e8cdd108fd79144c40322457de3d84f29e96c6f8c5b2aaef358946988/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba3e5d9e8cdd108fd79144c40322457de3d84f29e96c6f8c5b2aaef358946988/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba3e5d9e8cdd108fd79144c40322457de3d84f29e96c6f8c5b2aaef358946988/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba3e5d9e8cdd108fd79144c40322457de3d84f29e96c6f8c5b2aaef358946988/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:31:53 compute-0 podman[238827]: 2026-01-29 09:31:52.909454219 +0000 UTC m=+0.020410770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:31:53 compute-0 podman[238827]: 2026-01-29 09:31:53.007167229 +0000 UTC m=+0.118123810 container init 90637b8b1ba0b4de7b0d3e5b134ea1a06122fcfa5bfdfe9c053583983d91222e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_antonelli, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:31:53 compute-0 podman[238827]: 2026-01-29 09:31:53.017759073 +0000 UTC m=+0.128715614 container start 90637b8b1ba0b4de7b0d3e5b134ea1a06122fcfa5bfdfe9c053583983d91222e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:31:53 compute-0 podman[238827]: 2026-01-29 09:31:53.021091183 +0000 UTC m=+0.132047754 container attach 90637b8b1ba0b4de7b0d3e5b134ea1a06122fcfa5bfdfe9c053583983d91222e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:31:53 compute-0 lvm[238922]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:31:53 compute-0 lvm[238922]: VG ceph_vg1 finished
Jan 29 09:31:53 compute-0 lvm[238921]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:31:53 compute-0 lvm[238921]: VG ceph_vg0 finished
Jan 29 09:31:53 compute-0 lvm[238924]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:31:53 compute-0 lvm[238924]: VG ceph_vg2 finished
Jan 29 09:31:53 compute-0 quizzical_antonelli[238843]: {}
Jan 29 09:31:53 compute-0 systemd[1]: libpod-90637b8b1ba0b4de7b0d3e5b134ea1a06122fcfa5bfdfe9c053583983d91222e.scope: Deactivated successfully.
Jan 29 09:31:53 compute-0 podman[238827]: 2026-01-29 09:31:53.838244171 +0000 UTC m=+0.949200742 container died 90637b8b1ba0b4de7b0d3e5b134ea1a06122fcfa5bfdfe9c053583983d91222e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 09:31:53 compute-0 systemd[1]: libpod-90637b8b1ba0b4de7b0d3e5b134ea1a06122fcfa5bfdfe9c053583983d91222e.scope: Consumed 1.253s CPU time.
Jan 29 09:31:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba3e5d9e8cdd108fd79144c40322457de3d84f29e96c6f8c5b2aaef358946988-merged.mount: Deactivated successfully.
Jan 29 09:31:53 compute-0 podman[238827]: 2026-01-29 09:31:53.913213399 +0000 UTC m=+1.024169970 container remove 90637b8b1ba0b4de7b0d3e5b134ea1a06122fcfa5bfdfe9c053583983d91222e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:31:53 compute-0 systemd[1]: libpod-conmon-90637b8b1ba0b4de7b0d3e5b134ea1a06122fcfa5bfdfe9c053583983d91222e.scope: Deactivated successfully.
Jan 29 09:31:53 compute-0 sudo[238747]: pam_unix(sudo:session): session closed for user root
Jan 29 09:31:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:31:53 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:31:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:31:53 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:31:54 compute-0 sudo[238941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:31:54 compute-0 sudo[238941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:31:54 compute-0 sudo[238941]: pam_unix(sudo:session): session closed for user root
Jan 29 09:31:54 compute-0 ceph-mon[75183]: pgmap v635: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:54 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:31:54 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:31:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v636: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:31:56
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'vms', '.mgr']
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:31:56 compute-0 ceph-mon[75183]: pgmap v636: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:31:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v637: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:31:58 compute-0 ceph-mon[75183]: pgmap v637: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:31:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v638: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:00 compute-0 ceph-mon[75183]: pgmap v638: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v639: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0578630957479565e-06 of space, bias 4.0, pg target 0.0012694357148975478 quantized to 16 (current 32)
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:32:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:32:02 compute-0 ceph-mon[75183]: pgmap v639: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:32:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v640: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:04 compute-0 ceph-mon[75183]: pgmap v640: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v641: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:06 compute-0 ceph-mon[75183]: pgmap v641: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v642: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:32:08 compute-0 ceph-mon[75183]: pgmap v642: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v643: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:32:09.034 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:32:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:32:09.035 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:32:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:32:09.035 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:32:10 compute-0 ceph-mon[75183]: pgmap v643: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v644: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:12 compute-0 ceph-mon[75183]: pgmap v644: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:32:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v645: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:14 compute-0 ceph-mon[75183]: pgmap v645: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v646: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:15 compute-0 podman[238966]: 2026-01-29 09:32:15.213205527 +0000 UTC m=+0.135683622 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 09:32:16 compute-0 ceph-mon[75183]: pgmap v646: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v647: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:17 compute-0 podman[238993]: 2026-01-29 09:32:17.114041343 +0000 UTC m=+0.052916255 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 29 09:32:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:32:18 compute-0 ceph-mon[75183]: pgmap v647: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v648: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:20 compute-0 ceph-mon[75183]: pgmap v648: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v649: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:22 compute-0 ceph-mon[75183]: pgmap v649: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:32:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v650: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:24 compute-0 ceph-mon[75183]: pgmap v650: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v651: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:26 compute-0 ceph-mon[75183]: pgmap v651: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:32:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:32:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:32:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:32:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:32:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:32:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v652: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:32:28 compute-0 ceph-mon[75183]: pgmap v652: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v653: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:30 compute-0 ceph-mon[75183]: pgmap v653: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v654: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:32 compute-0 ceph-mon[75183]: pgmap v654: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.551756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679152551810, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2022, "num_deletes": 506, "total_data_size": 1961889, "memory_usage": 1997152, "flush_reason": "Manual Compaction"}
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679152564279, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1914111, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12073, "largest_seqno": 14094, "table_properties": {"data_size": 1905419, "index_size": 4870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 20041, "raw_average_key_size": 18, "raw_value_size": 1886032, "raw_average_value_size": 1747, "num_data_blocks": 224, "num_entries": 1079, "num_filter_entries": 1079, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769678957, "oldest_key_time": 1769678957, "file_creation_time": 1769679152, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 12601 microseconds, and 7358 cpu microseconds.
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.564349) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1914111 bytes OK
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.564378) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.566011) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.566025) EVENT_LOG_v1 {"time_micros": 1769679152566020, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.566042) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1952317, prev total WAL file size 1952317, number of live WAL files 2.
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.566432) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1869KB)], [32(4668KB)]
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679152566463, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 6694927, "oldest_snapshot_seqno": -1}
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3300 keys, 5265139 bytes, temperature: kUnknown
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679152604602, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 5265139, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5241281, "index_size": 14533, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 78431, "raw_average_key_size": 23, "raw_value_size": 5180128, "raw_average_value_size": 1569, "num_data_blocks": 630, "num_entries": 3300, "num_filter_entries": 3300, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677896, "oldest_key_time": 0, "file_creation_time": 1769679152, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.604916) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 5265139 bytes
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.606615) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.9 rd, 137.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 4.6 +0.0 blob) out(5.0 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 4325, records dropped: 1025 output_compression: NoCompression
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.606637) EVENT_LOG_v1 {"time_micros": 1769679152606625, "job": 14, "event": "compaction_finished", "compaction_time_micros": 38273, "compaction_time_cpu_micros": 20579, "output_level": 6, "num_output_files": 1, "total_output_size": 5265139, "num_input_records": 4325, "num_output_records": 3300, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679152607193, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679152608018, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.566360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.608200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.608209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.608212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.608215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:32:32 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:32:32.608218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:32:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v655: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:33 compute-0 ceph-mon[75183]: pgmap v655: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v656: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:35 compute-0 ceph-mon[75183]: pgmap v656: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v657: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:32:37 compute-0 ceph-mon[75183]: pgmap v657: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v658: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 29 09:32:39 compute-0 ceph-mon[75183]: pgmap v658: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 29 09:32:39 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 29 09:32:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v660: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 29 09:32:40 compute-0 ceph-mon[75183]: osdmap e47: 3 total, 3 up, 3 in
Jan 29 09:32:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 29 09:32:40 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 29 09:32:41 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 29 09:32:41 compute-0 ceph-mon[75183]: pgmap v660: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:41 compute-0 ceph-mon[75183]: osdmap e48: 3 total, 3 up, 3 in
Jan 29 09:32:41 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 29 09:32:41 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 29 09:32:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:32:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v663: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:42 compute-0 ceph-mon[75183]: osdmap e49: 3 total, 3 up, 3 in
Jan 29 09:32:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 29 09:32:43 compute-0 ceph-mon[75183]: pgmap v663: 193 pgs: 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 29 09:32:43 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 29 09:32:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v665: 193 pgs: 193 active+clean; 41 MiB data, 117 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 8.3 MiB/s wr, 77 op/s
Jan 29 09:32:44 compute-0 ceph-mon[75183]: osdmap e50: 3 total, 3 up, 3 in
Jan 29 09:32:45 compute-0 ceph-mon[75183]: pgmap v665: 193 pgs: 193 active+clean; 41 MiB data, 117 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 8.3 MiB/s wr, 77 op/s
Jan 29 09:32:46 compute-0 podman[239013]: 2026-01-29 09:32:46.120949033 +0000 UTC m=+0.063844955 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 29 09:32:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:32:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4304 writes, 19K keys, 4304 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4304 writes, 400 syncs, 10.76 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 63 writes, 170 keys, 63 commit groups, 1.0 writes per commit group, ingest: 0.17 MB, 0.00 MB/s
                                           Interval WAL: 63 writes, 30 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:32:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v666: 193 pgs: 193 active+clean; 41 MiB data, 117 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Jan 29 09:32:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:32:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 29 09:32:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 29 09:32:47 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 29 09:32:47 compute-0 ceph-mon[75183]: pgmap v666: 193 pgs: 193 active+clean; 41 MiB data, 117 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Jan 29 09:32:47 compute-0 ceph-mon[75183]: osdmap e51: 3 total, 3 up, 3 in
Jan 29 09:32:48 compute-0 podman[239039]: 2026-01-29 09:32:48.109264863 +0000 UTC m=+0.049100904 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.332 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.333 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.356 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.356 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.356 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.374 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.374 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.374 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.374 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.375 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.375 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.375 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.375 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.375 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.419 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.419 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.419 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.419 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.420 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:32:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v668: 193 pgs: 193 active+clean; 41 MiB data, 117 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.0 MiB/s wr, 55 op/s
Jan 29 09:32:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:32:48 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/431498183' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:32:48 compute-0 nova_compute[236255]: 2026-01-29 09:32:48.942 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:32:48 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/431498183' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:32:49 compute-0 nova_compute[236255]: 2026-01-29 09:32:49.095 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:32:49 compute-0 nova_compute[236255]: 2026-01-29 09:32:49.096 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5290MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:32:49 compute-0 nova_compute[236255]: 2026-01-29 09:32:49.096 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:32:49 compute-0 nova_compute[236255]: 2026-01-29 09:32:49.097 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:32:49 compute-0 nova_compute[236255]: 2026-01-29 09:32:49.168 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:32:49 compute-0 nova_compute[236255]: 2026-01-29 09:32:49.169 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:32:49 compute-0 nova_compute[236255]: 2026-01-29 09:32:49.196 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:32:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:32:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/679317998' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:32:49 compute-0 nova_compute[236255]: 2026-01-29 09:32:49.720 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:32:49 compute-0 nova_compute[236255]: 2026-01-29 09:32:49.728 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:32:49 compute-0 nova_compute[236255]: 2026-01-29 09:32:49.748 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:32:49 compute-0 nova_compute[236255]: 2026-01-29 09:32:49.751 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:32:49 compute-0 nova_compute[236255]: 2026-01-29 09:32:49.752 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:32:49 compute-0 ceph-mon[75183]: pgmap v668: 193 pgs: 193 active+clean; 41 MiB data, 117 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.0 MiB/s wr, 55 op/s
Jan 29 09:32:49 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/679317998' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:32:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v669: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 29 09:32:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:32:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/354469757' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:32:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:32:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/354469757' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:32:52 compute-0 ceph-mon[75183]: pgmap v669: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 29 09:32:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/354469757' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:32:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/354469757' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:32:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:32:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v670: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 43 op/s
Jan 29 09:32:54 compute-0 ceph-mon[75183]: pgmap v670: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 43 op/s
Jan 29 09:32:54 compute-0 sudo[239102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:32:54 compute-0 sudo[239102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:32:54 compute-0 sudo[239102]: pam_unix(sudo:session): session closed for user root
Jan 29 09:32:54 compute-0 sudo[239127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:32:54 compute-0 sudo[239127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:32:54 compute-0 sudo[239127]: pam_unix(sudo:session): session closed for user root
Jan 29 09:32:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:32:54 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:32:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:32:54 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:32:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:32:54 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:32:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:32:54 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:32:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:32:54 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:32:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:32:54 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:32:54 compute-0 sudo[239183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:32:54 compute-0 sudo[239183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:32:54 compute-0 sudo[239183]: pam_unix(sudo:session): session closed for user root
Jan 29 09:32:54 compute-0 sudo[239208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:32:54 compute-0 sudo[239208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:32:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v671: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:54 compute-0 podman[239245]: 2026-01-29 09:32:54.967246294 +0000 UTC m=+0.039011011 container create 6b47c4d89ca37231e40364684ce1fd17a5d82a5cb8dd53c8293d57758c9ba01e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 09:32:55 compute-0 systemd[1]: Started libpod-conmon-6b47c4d89ca37231e40364684ce1fd17a5d82a5cb8dd53c8293d57758c9ba01e.scope.
Jan 29 09:32:55 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:32:55 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:32:55 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:32:55 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:32:55 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:32:55 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:32:55 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:32:55 compute-0 podman[239245]: 2026-01-29 09:32:55.038074207 +0000 UTC m=+0.109839014 container init 6b47c4d89ca37231e40364684ce1fd17a5d82a5cb8dd53c8293d57758c9ba01e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 09:32:55 compute-0 podman[239245]: 2026-01-29 09:32:54.946628004 +0000 UTC m=+0.018392751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:32:55 compute-0 podman[239245]: 2026-01-29 09:32:55.043608677 +0000 UTC m=+0.115373394 container start 6b47c4d89ca37231e40364684ce1fd17a5d82a5cb8dd53c8293d57758c9ba01e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 09:32:55 compute-0 podman[239245]: 2026-01-29 09:32:55.046785904 +0000 UTC m=+0.118550721 container attach 6b47c4d89ca37231e40364684ce1fd17a5d82a5cb8dd53c8293d57758c9ba01e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 09:32:55 compute-0 systemd[1]: libpod-6b47c4d89ca37231e40364684ce1fd17a5d82a5cb8dd53c8293d57758c9ba01e.scope: Deactivated successfully.
Jan 29 09:32:55 compute-0 compassionate_cerf[239261]: 167 167
Jan 29 09:32:55 compute-0 podman[239245]: 2026-01-29 09:32:55.051502362 +0000 UTC m=+0.123267109 container died 6b47c4d89ca37231e40364684ce1fd17a5d82a5cb8dd53c8293d57758c9ba01e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:32:55 compute-0 conmon[239261]: conmon 6b47c4d89ca37231e403 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b47c4d89ca37231e40364684ce1fd17a5d82a5cb8dd53c8293d57758c9ba01e.scope/container/memory.events
Jan 29 09:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ce017400b54e6b4fafde272bd17d931d5c0bd4f9d14677121df4eb7738944e9-merged.mount: Deactivated successfully.
Jan 29 09:32:55 compute-0 podman[239245]: 2026-01-29 09:32:55.095819985 +0000 UTC m=+0.167584732 container remove 6b47c4d89ca37231e40364684ce1fd17a5d82a5cb8dd53c8293d57758c9ba01e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cerf, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 09:32:55 compute-0 systemd[1]: libpod-conmon-6b47c4d89ca37231e40364684ce1fd17a5d82a5cb8dd53c8293d57758c9ba01e.scope: Deactivated successfully.
Jan 29 09:32:55 compute-0 podman[239285]: 2026-01-29 09:32:55.248281705 +0000 UTC m=+0.047402898 container create f7f2f2d1a7ca8ea4969a121b5ab330c53c0ec35ecb9bfc962f0884b745d7f05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 09:32:55 compute-0 systemd[1]: Started libpod-conmon-f7f2f2d1a7ca8ea4969a121b5ab330c53c0ec35ecb9bfc962f0884b745d7f05b.scope.
Jan 29 09:32:55 compute-0 podman[239285]: 2026-01-29 09:32:55.224646263 +0000 UTC m=+0.023767456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:32:55 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4406505f2588ca5f850637cb69c4410f8d2007240b19049b57007e64d2a7d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4406505f2588ca5f850637cb69c4410f8d2007240b19049b57007e64d2a7d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4406505f2588ca5f850637cb69c4410f8d2007240b19049b57007e64d2a7d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4406505f2588ca5f850637cb69c4410f8d2007240b19049b57007e64d2a7d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4406505f2588ca5f850637cb69c4410f8d2007240b19049b57007e64d2a7d4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:32:55 compute-0 podman[239285]: 2026-01-29 09:32:55.358070986 +0000 UTC m=+0.157192169 container init f7f2f2d1a7ca8ea4969a121b5ab330c53c0ec35ecb9bfc962f0884b745d7f05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 09:32:55 compute-0 podman[239285]: 2026-01-29 09:32:55.371396368 +0000 UTC m=+0.170517511 container start f7f2f2d1a7ca8ea4969a121b5ab330c53c0ec35ecb9bfc962f0884b745d7f05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_kilby, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 09:32:55 compute-0 podman[239285]: 2026-01-29 09:32:55.374480692 +0000 UTC m=+0.173601835 container attach f7f2f2d1a7ca8ea4969a121b5ab330c53c0ec35ecb9bfc962f0884b745d7f05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_kilby, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:32:55 compute-0 laughing_kilby[239302]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:32:55 compute-0 laughing_kilby[239302]: --> All data devices are unavailable
Jan 29 09:32:55 compute-0 systemd[1]: libpod-f7f2f2d1a7ca8ea4969a121b5ab330c53c0ec35ecb9bfc962f0884b745d7f05b.scope: Deactivated successfully.
Jan 29 09:32:55 compute-0 conmon[239302]: conmon f7f2f2d1a7ca8ea4969a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7f2f2d1a7ca8ea4969a121b5ab330c53c0ec35ecb9bfc962f0884b745d7f05b.scope/container/memory.events
Jan 29 09:32:55 compute-0 podman[239285]: 2026-01-29 09:32:55.864107707 +0000 UTC m=+0.663228860 container died f7f2f2d1a7ca8ea4969a121b5ab330c53c0ec35ecb9bfc962f0884b745d7f05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 09:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b4406505f2588ca5f850637cb69c4410f8d2007240b19049b57007e64d2a7d4-merged.mount: Deactivated successfully.
Jan 29 09:32:55 compute-0 podman[239285]: 2026-01-29 09:32:55.89955486 +0000 UTC m=+0.698676003 container remove f7f2f2d1a7ca8ea4969a121b5ab330c53c0ec35ecb9bfc962f0884b745d7f05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_kilby, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:32:55 compute-0 systemd[1]: libpod-conmon-f7f2f2d1a7ca8ea4969a121b5ab330c53c0ec35ecb9bfc962f0884b745d7f05b.scope: Deactivated successfully.
Jan 29 09:32:55 compute-0 sudo[239208]: pam_unix(sudo:session): session closed for user root
Jan 29 09:32:55 compute-0 sudo[239334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:32:55 compute-0 sudo[239334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:32:55 compute-0 sudo[239334]: pam_unix(sudo:session): session closed for user root
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:32:56
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'backups', 'volumes']
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:32:56 compute-0 ceph-mon[75183]: pgmap v671: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:56 compute-0 sudo[239359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:32:56 compute-0 sudo[239359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:32:56 compute-0 podman[239396]: 2026-01-29 09:32:56.304085565 +0000 UTC m=+0.043994916 container create 39224c98ad3458707f75f8f6f60b0c053744a9ab9feb862019901275e37beca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:32:56 compute-0 systemd[1]: Started libpod-conmon-39224c98ad3458707f75f8f6f60b0c053744a9ab9feb862019901275e37beca1.scope.
Jan 29 09:32:56 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:32:56 compute-0 podman[239396]: 2026-01-29 09:32:56.278983963 +0000 UTC m=+0.018893384 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:32:56 compute-0 podman[239396]: 2026-01-29 09:32:56.376440399 +0000 UTC m=+0.116349770 container init 39224c98ad3458707f75f8f6f60b0c053744a9ab9feb862019901275e37beca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_noether, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 09:32:56 compute-0 podman[239396]: 2026-01-29 09:32:56.383698866 +0000 UTC m=+0.123608207 container start 39224c98ad3458707f75f8f6f60b0c053744a9ab9feb862019901275e37beca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_noether, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:32:56 compute-0 podman[239396]: 2026-01-29 09:32:56.387342145 +0000 UTC m=+0.127251496 container attach 39224c98ad3458707f75f8f6f60b0c053744a9ab9feb862019901275e37beca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:32:56 compute-0 infallible_noether[239412]: 167 167
Jan 29 09:32:56 compute-0 systemd[1]: libpod-39224c98ad3458707f75f8f6f60b0c053744a9ab9feb862019901275e37beca1.scope: Deactivated successfully.
Jan 29 09:32:56 compute-0 podman[239396]: 2026-01-29 09:32:56.389028961 +0000 UTC m=+0.128938312 container died 39224c98ad3458707f75f8f6f60b0c053744a9ab9feb862019901275e37beca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-761c57c1d64846fc4b6f5eb94caec596ba29aa032f130c86992cc2a9ed89d1da-merged.mount: Deactivated successfully.
Jan 29 09:32:56 compute-0 podman[239396]: 2026-01-29 09:32:56.427805164 +0000 UTC m=+0.167714525 container remove 39224c98ad3458707f75f8f6f60b0c053744a9ab9feb862019901275e37beca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:32:56 compute-0 systemd[1]: libpod-conmon-39224c98ad3458707f75f8f6f60b0c053744a9ab9feb862019901275e37beca1.scope: Deactivated successfully.
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:32:56 compute-0 podman[239437]: 2026-01-29 09:32:56.590948804 +0000 UTC m=+0.055832257 container create c111cb38f92f5bb41c4ca9df07714b8be59e6e82bfd2a3a9dcd1cb1606741812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_cray, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 09:32:56 compute-0 systemd[1]: Started libpod-conmon-c111cb38f92f5bb41c4ca9df07714b8be59e6e82bfd2a3a9dcd1cb1606741812.scope.
Jan 29 09:32:56 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9cc3968bfbd653185f085477cd698e2a3aebbce5c0f390468583b6aa2c64b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9cc3968bfbd653185f085477cd698e2a3aebbce5c0f390468583b6aa2c64b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9cc3968bfbd653185f085477cd698e2a3aebbce5c0f390468583b6aa2c64b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9cc3968bfbd653185f085477cd698e2a3aebbce5c0f390468583b6aa2c64b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:32:56 compute-0 podman[239437]: 2026-01-29 09:32:56.569399779 +0000 UTC m=+0.034283272 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:32:56 compute-0 podman[239437]: 2026-01-29 09:32:56.668177081 +0000 UTC m=+0.133060554 container init c111cb38f92f5bb41c4ca9df07714b8be59e6e82bfd2a3a9dcd1cb1606741812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_cray, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:32:56 compute-0 podman[239437]: 2026-01-29 09:32:56.673623959 +0000 UTC m=+0.138507412 container start c111cb38f92f5bb41c4ca9df07714b8be59e6e82bfd2a3a9dcd1cb1606741812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_cray, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:32:56 compute-0 podman[239437]: 2026-01-29 09:32:56.6784622 +0000 UTC m=+0.143345693 container attach c111cb38f92f5bb41c4ca9df07714b8be59e6e82bfd2a3a9dcd1cb1606741812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:32:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v672: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:56 compute-0 eloquent_cray[239453]: {
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:     "0": [
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:         {
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "devices": [
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "/dev/loop3"
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             ],
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_name": "ceph_lv0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_size": "21470642176",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "name": "ceph_lv0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "tags": {
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.cluster_name": "ceph",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.crush_device_class": "",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.encrypted": "0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.objectstore": "bluestore",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.osd_id": "0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.type": "block",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.vdo": "0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.with_tpm": "0"
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             },
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "type": "block",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "vg_name": "ceph_vg0"
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:         }
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:     ],
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:     "1": [
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:         {
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "devices": [
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "/dev/loop4"
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             ],
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_name": "ceph_lv1",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_size": "21470642176",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "name": "ceph_lv1",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "tags": {
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.cluster_name": "ceph",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.crush_device_class": "",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.encrypted": "0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.objectstore": "bluestore",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.osd_id": "1",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.type": "block",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.vdo": "0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.with_tpm": "0"
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             },
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "type": "block",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "vg_name": "ceph_vg1"
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:         }
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:     ],
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:     "2": [
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:         {
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "devices": [
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "/dev/loop5"
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             ],
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_name": "ceph_lv2",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_size": "21470642176",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "name": "ceph_lv2",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "tags": {
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.cluster_name": "ceph",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.crush_device_class": "",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.encrypted": "0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.objectstore": "bluestore",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.osd_id": "2",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.type": "block",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.vdo": "0",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:                 "ceph.with_tpm": "0"
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             },
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "type": "block",
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:             "vg_name": "ceph_vg2"
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:         }
Jan 29 09:32:56 compute-0 eloquent_cray[239453]:     ]
Jan 29 09:32:56 compute-0 eloquent_cray[239453]: }
Jan 29 09:32:56 compute-0 systemd[1]: libpod-c111cb38f92f5bb41c4ca9df07714b8be59e6e82bfd2a3a9dcd1cb1606741812.scope: Deactivated successfully.
Jan 29 09:32:56 compute-0 podman[239437]: 2026-01-29 09:32:56.964069496 +0000 UTC m=+0.428952949 container died c111cb38f92f5bb41c4ca9df07714b8be59e6e82bfd2a3a9dcd1cb1606741812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd9cc3968bfbd653185f085477cd698e2a3aebbce5c0f390468583b6aa2c64b3-merged.mount: Deactivated successfully.
Jan 29 09:32:57 compute-0 podman[239437]: 2026-01-29 09:32:57.005226993 +0000 UTC m=+0.470110446 container remove c111cb38f92f5bb41c4ca9df07714b8be59e6e82bfd2a3a9dcd1cb1606741812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 09:32:57 compute-0 systemd[1]: libpod-conmon-c111cb38f92f5bb41c4ca9df07714b8be59e6e82bfd2a3a9dcd1cb1606741812.scope: Deactivated successfully.
Jan 29 09:32:57 compute-0 sudo[239359]: pam_unix(sudo:session): session closed for user root
Jan 29 09:32:57 compute-0 sudo[239473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:32:57 compute-0 sudo[239473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:32:57 compute-0 sudo[239473]: pam_unix(sudo:session): session closed for user root
Jan 29 09:32:57 compute-0 sudo[239498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:32:57 compute-0 sudo[239498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:32:57 compute-0 podman[239535]: 2026-01-29 09:32:57.43927159 +0000 UTC m=+0.038505577 container create 51d81f3922433de49ce2ed2d01f2e20685f4f9edab982ce504f9c0833705ddb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 09:32:57 compute-0 systemd[1]: Started libpod-conmon-51d81f3922433de49ce2ed2d01f2e20685f4f9edab982ce504f9c0833705ddb1.scope.
Jan 29 09:32:57 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:32:57 compute-0 podman[239535]: 2026-01-29 09:32:57.421305072 +0000 UTC m=+0.020539099 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:32:57 compute-0 podman[239535]: 2026-01-29 09:32:57.518237524 +0000 UTC m=+0.117471491 container init 51d81f3922433de49ce2ed2d01f2e20685f4f9edab982ce504f9c0833705ddb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 09:32:57 compute-0 podman[239535]: 2026-01-29 09:32:57.525357047 +0000 UTC m=+0.124591004 container start 51d81f3922433de49ce2ed2d01f2e20685f4f9edab982ce504f9c0833705ddb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:32:57 compute-0 sweet_davinci[239551]: 167 167
Jan 29 09:32:57 compute-0 systemd[1]: libpod-51d81f3922433de49ce2ed2d01f2e20685f4f9edab982ce504f9c0833705ddb1.scope: Deactivated successfully.
Jan 29 09:32:57 compute-0 podman[239535]: 2026-01-29 09:32:57.530495517 +0000 UTC m=+0.129729494 container attach 51d81f3922433de49ce2ed2d01f2e20685f4f9edab982ce504f9c0833705ddb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:32:57 compute-0 podman[239535]: 2026-01-29 09:32:57.530862887 +0000 UTC m=+0.130096844 container died 51d81f3922433de49ce2ed2d01f2e20685f4f9edab982ce504f9c0833705ddb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f82bffcbabefe68a564c9aa5214b07f5859d7465fee8f6b18304a16095036758-merged.mount: Deactivated successfully.
Jan 29 09:32:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:32:57 compute-0 podman[239535]: 2026-01-29 09:32:57.563732169 +0000 UTC m=+0.162966126 container remove 51d81f3922433de49ce2ed2d01f2e20685f4f9edab982ce504f9c0833705ddb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 09:32:57 compute-0 systemd[1]: libpod-conmon-51d81f3922433de49ce2ed2d01f2e20685f4f9edab982ce504f9c0833705ddb1.scope: Deactivated successfully.
Jan 29 09:32:57 compute-0 podman[239574]: 2026-01-29 09:32:57.70038701 +0000 UTC m=+0.049299710 container create 3aa151db905eba59fc7d942ca66b67b9f7eb0134d4b150815dd6468506dc7be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_turing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:32:57 compute-0 systemd[1]: Started libpod-conmon-3aa151db905eba59fc7d942ca66b67b9f7eb0134d4b150815dd6468506dc7be9.scope.
Jan 29 09:32:57 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cea57986976edfc5476c7b426f9dd568e84ff8ec38d2dd82da199ceb4a19fbfa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cea57986976edfc5476c7b426f9dd568e84ff8ec38d2dd82da199ceb4a19fbfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cea57986976edfc5476c7b426f9dd568e84ff8ec38d2dd82da199ceb4a19fbfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cea57986976edfc5476c7b426f9dd568e84ff8ec38d2dd82da199ceb4a19fbfa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:32:57 compute-0 podman[239574]: 2026-01-29 09:32:57.681458056 +0000 UTC m=+0.030370836 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:32:57 compute-0 podman[239574]: 2026-01-29 09:32:57.792188113 +0000 UTC m=+0.141100843 container init 3aa151db905eba59fc7d942ca66b67b9f7eb0134d4b150815dd6468506dc7be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:32:57 compute-0 podman[239574]: 2026-01-29 09:32:57.797516607 +0000 UTC m=+0.146429327 container start 3aa151db905eba59fc7d942ca66b67b9f7eb0134d4b150815dd6468506dc7be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_turing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:32:57 compute-0 podman[239574]: 2026-01-29 09:32:57.801319181 +0000 UTC m=+0.150231911 container attach 3aa151db905eba59fc7d942ca66b67b9f7eb0134d4b150815dd6468506dc7be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_turing, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:32:58 compute-0 ceph-mon[75183]: pgmap v672: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:58 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:32:58 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Cumulative writes: 4402 writes, 20K keys, 4402 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4402 writes, 440 syncs, 10.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 80 writes, 336 keys, 80 commit groups, 1.0 writes per commit group, ingest: 0.20 MB, 0.00 MB/s
                                           Interval WAL: 80 writes, 34 syncs, 2.35 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:32:58 compute-0 lvm[239669]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:32:58 compute-0 lvm[239672]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:32:58 compute-0 lvm[239670]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:32:58 compute-0 lvm[239670]: VG ceph_vg1 finished
Jan 29 09:32:58 compute-0 lvm[239672]: VG ceph_vg2 finished
Jan 29 09:32:58 compute-0 lvm[239669]: VG ceph_vg0 finished
Jan 29 09:32:58 compute-0 lvm[239673]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:32:58 compute-0 lvm[239673]: VG ceph_vg2 finished
Jan 29 09:32:58 compute-0 elegant_turing[239591]: {}
Jan 29 09:32:58 compute-0 systemd[1]: libpod-3aa151db905eba59fc7d942ca66b67b9f7eb0134d4b150815dd6468506dc7be9.scope: Deactivated successfully.
Jan 29 09:32:58 compute-0 systemd[1]: libpod-3aa151db905eba59fc7d942ca66b67b9f7eb0134d4b150815dd6468506dc7be9.scope: Consumed 1.127s CPU time.
Jan 29 09:32:58 compute-0 podman[239574]: 2026-01-29 09:32:58.574847884 +0000 UTC m=+0.923760574 container died 3aa151db905eba59fc7d942ca66b67b9f7eb0134d4b150815dd6468506dc7be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_turing, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-cea57986976edfc5476c7b426f9dd568e84ff8ec38d2dd82da199ceb4a19fbfa-merged.mount: Deactivated successfully.
Jan 29 09:32:58 compute-0 podman[239574]: 2026-01-29 09:32:58.652747229 +0000 UTC m=+1.001659959 container remove 3aa151db905eba59fc7d942ca66b67b9f7eb0134d4b150815dd6468506dc7be9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_turing, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:32:58 compute-0 systemd[1]: libpod-conmon-3aa151db905eba59fc7d942ca66b67b9f7eb0134d4b150815dd6468506dc7be9.scope: Deactivated successfully.
Jan 29 09:32:58 compute-0 sudo[239498]: pam_unix(sudo:session): session closed for user root
Jan 29 09:32:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:32:58 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:32:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:32:58 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:32:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v673: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:32:58 compute-0 sudo[239687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:32:58 compute-0 sudo[239687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:32:58 compute-0 sudo[239687]: pam_unix(sudo:session): session closed for user root
Jan 29 09:32:59 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:32:59 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:32:59 compute-0 ceph-mon[75183]: pgmap v673: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:00 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:33:00.509 152476 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:86:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:f9:50:a2:e1:9f'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 29 09:33:00 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:33:00.510 152476 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 29 09:33:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v674: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659385972775355 of space, bias 1.0, pg target 0.19978157918326064 quantized to 32 (current 32)
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1151661481321375e-06 of space, bias 4.0, pg target 0.001338199377758565 quantized to 16 (current 32)
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:33:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:33:01 compute-0 ceph-mon[75183]: pgmap v674: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:33:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v675: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:03 compute-0 ceph-mon[75183]: pgmap v675: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v676: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:05 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:33:05.512 152476 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=347a774e-f56f-46e9-8fb5-240ce07d1693, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 09:33:05 compute-0 ceph-mon[75183]: pgmap v676: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v677: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:33:07 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:33:07 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Cumulative writes: 4249 writes, 19K keys, 4249 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4249 writes, 388 syncs, 10.95 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 78 writes, 338 keys, 78 commit groups, 1.0 writes per commit group, ingest: 0.24 MB, 0.00 MB/s
                                           Interval WAL: 78 writes, 31 syncs, 2.52 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:33:07 compute-0 ceph-mon[75183]: pgmap v677: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v678: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:33:09.034 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:33:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:33:09.035 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:33:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:33:09.035 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:33:09 compute-0 ceph-mon[75183]: pgmap v678: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v679: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:11 compute-0 ceph-mon[75183]: pgmap v679: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:33:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v680: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 29 09:33:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 29 09:33:12 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 29 09:33:13 compute-0 ceph-mon[75183]: pgmap v680: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:13 compute-0 ceph-mon[75183]: osdmap e52: 3 total, 3 up, 3 in
Jan 29 09:33:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v682: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 3.2 KiB/s wr, 37 op/s
Jan 29 09:33:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 29 09:33:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 29 09:33:14 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 29 09:33:15 compute-0 ceph-mon[75183]: pgmap v682: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 3.2 KiB/s wr, 37 op/s
Jan 29 09:33:15 compute-0 ceph-mon[75183]: osdmap e53: 3 total, 3 up, 3 in
Jan 29 09:33:16 compute-0 ceph-mgr[75473]: [devicehealth INFO root] Check health
Jan 29 09:33:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v684: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 4.0 KiB/s wr, 47 op/s
Jan 29 09:33:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 29 09:33:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 29 09:33:16 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 29 09:33:17 compute-0 podman[239712]: 2026-01-29 09:33:17.160940922 +0000 UTC m=+0.092033280 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Jan 29 09:33:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:33:17 compute-0 ceph-mon[75183]: pgmap v684: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 4.0 KiB/s wr, 47 op/s
Jan 29 09:33:17 compute-0 ceph-mon[75183]: osdmap e54: 3 total, 3 up, 3 in
Jan 29 09:33:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v686: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 63 op/s
Jan 29 09:33:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 29 09:33:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 29 09:33:18 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 29 09:33:19 compute-0 podman[239740]: 2026-01-29 09:33:19.156448987 +0000 UTC m=+0.092450612 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 29 09:33:19 compute-0 ceph-mon[75183]: pgmap v686: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 63 op/s
Jan 29 09:33:19 compute-0 ceph-mon[75183]: osdmap e55: 3 total, 3 up, 3 in
Jan 29 09:33:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v688: 193 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 178 active+clean; 153 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 19 MiB/s wr, 171 op/s
Jan 29 09:33:20 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 29 09:33:20 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 29 09:33:20 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 29 09:33:21 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 29 09:33:21 compute-0 ceph-mon[75183]: pgmap v688: 193 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 178 active+clean; 153 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 19 MiB/s wr, 171 op/s
Jan 29 09:33:21 compute-0 ceph-mon[75183]: osdmap e56: 3 total, 3 up, 3 in
Jan 29 09:33:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 29 09:33:22 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 29 09:33:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:33:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 29 09:33:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 29 09:33:22 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 29 09:33:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v692: 193 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 178 active+clean; 153 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 181 KiB/s rd, 28 MiB/s wr, 256 op/s
Jan 29 09:33:23 compute-0 ceph-mon[75183]: osdmap e57: 3 total, 3 up, 3 in
Jan 29 09:33:23 compute-0 ceph-mon[75183]: osdmap e58: 3 total, 3 up, 3 in
Jan 29 09:33:24 compute-0 ceph-mon[75183]: pgmap v692: 193 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 178 active+clean; 153 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 181 KiB/s rd, 28 MiB/s wr, 256 op/s
Jan 29 09:33:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 29 09:33:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 29 09:33:24 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 29 09:33:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v694: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 10 KiB/s wr, 163 op/s
Jan 29 09:33:25 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 29 09:33:25 compute-0 ceph-mon[75183]: osdmap e59: 3 total, 3 up, 3 in
Jan 29 09:33:25 compute-0 ceph-mon[75183]: pgmap v694: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 10 KiB/s wr, 163 op/s
Jan 29 09:33:25 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 29 09:33:25 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 29 09:33:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:33:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:33:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:33:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:33:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:33:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:33:26 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 29 09:33:26 compute-0 ceph-mon[75183]: osdmap e60: 3 total, 3 up, 3 in
Jan 29 09:33:26 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 29 09:33:26 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 29 09:33:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v697: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 9.5 KiB/s wr, 154 op/s
Jan 29 09:33:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:33:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 29 09:33:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 29 09:33:27 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 29 09:33:27 compute-0 ceph-mon[75183]: osdmap e61: 3 total, 3 up, 3 in
Jan 29 09:33:27 compute-0 ceph-mon[75183]: pgmap v697: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 9.5 KiB/s wr, 154 op/s
Jan 29 09:33:27 compute-0 ceph-mon[75183]: osdmap e62: 3 total, 3 up, 3 in
Jan 29 09:33:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v699: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 29 09:33:29 compute-0 ceph-mon[75183]: pgmap v699: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 29 09:33:29 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 29 09:33:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v701: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 9.6 KiB/s wr, 107 op/s
Jan 29 09:33:30 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 29 09:33:30 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 29 09:33:30 compute-0 ceph-mon[75183]: osdmap e63: 3 total, 3 up, 3 in
Jan 29 09:33:30 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 29 09:33:31 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 29 09:33:31 compute-0 ceph-mon[75183]: pgmap v701: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 9.6 KiB/s wr, 107 op/s
Jan 29 09:33:31 compute-0 ceph-mon[75183]: osdmap e64: 3 total, 3 up, 3 in
Jan 29 09:33:31 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 29 09:33:31 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 29 09:33:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:33:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 29 09:33:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 29 09:33:32 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 29 09:33:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v705: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 12 KiB/s wr, 137 op/s
Jan 29 09:33:32 compute-0 ceph-mon[75183]: osdmap e65: 3 total, 3 up, 3 in
Jan 29 09:33:32 compute-0 ceph-mon[75183]: osdmap e66: 3 total, 3 up, 3 in
Jan 29 09:33:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 29 09:33:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 29 09:33:33 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 29 09:33:33 compute-0 ceph-mon[75183]: pgmap v705: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 12 KiB/s wr, 137 op/s
Jan 29 09:33:33 compute-0 ceph-mon[75183]: osdmap e67: 3 total, 3 up, 3 in
Jan 29 09:33:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v707: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 22 KiB/s wr, 221 op/s
Jan 29 09:33:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 29 09:33:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 29 09:33:34 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 29 09:33:35 compute-0 ceph-mon[75183]: pgmap v707: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 22 KiB/s wr, 221 op/s
Jan 29 09:33:35 compute-0 ceph-mon[75183]: osdmap e68: 3 total, 3 up, 3 in
Jan 29 09:33:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v709: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 19 KiB/s wr, 182 op/s
Jan 29 09:33:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:33:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 29 09:33:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 29 09:33:37 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 29 09:33:37 compute-0 nova_compute[236255]: 2026-01-29 09:33:37.649 236262 DEBUG oslo_concurrency.lockutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "91dbe8b1-3e4d-436e-966e-05af44b988c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:33:37 compute-0 nova_compute[236255]: 2026-01-29 09:33:37.650 236262 DEBUG oslo_concurrency.lockutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "91dbe8b1-3e4d-436e-966e-05af44b988c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:33:37 compute-0 nova_compute[236255]: 2026-01-29 09:33:37.674 236262 DEBUG nova.compute.manager [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 29 09:33:37 compute-0 nova_compute[236255]: 2026-01-29 09:33:37.792 236262 DEBUG oslo_concurrency.lockutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:33:37 compute-0 nova_compute[236255]: 2026-01-29 09:33:37.793 236262 DEBUG oslo_concurrency.lockutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:33:37 compute-0 nova_compute[236255]: 2026-01-29 09:33:37.804 236262 DEBUG nova.virt.hardware [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 29 09:33:37 compute-0 nova_compute[236255]: 2026-01-29 09:33:37.805 236262 INFO nova.compute.claims [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Claim successful on node compute-0.ctlplane.example.com
Jan 29 09:33:37 compute-0 nova_compute[236255]: 2026-01-29 09:33:37.914 236262 DEBUG oslo_concurrency.processutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:33:37 compute-0 ceph-mon[75183]: pgmap v709: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 19 KiB/s wr, 182 op/s
Jan 29 09:33:37 compute-0 ceph-mon[75183]: osdmap e69: 3 total, 3 up, 3 in
Jan 29 09:33:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:33:38 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1128435520' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:33:38 compute-0 nova_compute[236255]: 2026-01-29 09:33:38.475 236262 DEBUG oslo_concurrency.processutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:33:38 compute-0 nova_compute[236255]: 2026-01-29 09:33:38.483 236262 DEBUG nova.compute.provider_tree [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:33:38 compute-0 nova_compute[236255]: 2026-01-29 09:33:38.507 236262 DEBUG nova.scheduler.client.report [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:33:38 compute-0 nova_compute[236255]: 2026-01-29 09:33:38.530 236262 DEBUG oslo_concurrency.lockutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:33:38 compute-0 nova_compute[236255]: 2026-01-29 09:33:38.531 236262 DEBUG nova.compute.manager [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 29 09:33:38 compute-0 nova_compute[236255]: 2026-01-29 09:33:38.575 236262 DEBUG nova.compute.manager [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 29 09:33:38 compute-0 nova_compute[236255]: 2026-01-29 09:33:38.576 236262 DEBUG nova.network.neutron [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 29 09:33:38 compute-0 nova_compute[236255]: 2026-01-29 09:33:38.602 236262 INFO nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 29 09:33:38 compute-0 nova_compute[236255]: 2026-01-29 09:33:38.628 236262 DEBUG nova.compute.manager [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 29 09:33:38 compute-0 nova_compute[236255]: 2026-01-29 09:33:38.674 236262 INFO nova.virt.block_device [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Booting with volume d774a831-6191-4c45-a2c6-9de804ae8122 at /dev/vda
Jan 29 09:33:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v711: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 15 KiB/s wr, 147 op/s
Jan 29 09:33:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 29 09:33:38 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1128435520' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:33:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 29 09:33:38 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.131 236262 DEBUG os_brick.utils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.133 236262 INFO oslo.privsep.daemon [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp22jm5qrv/privsep.sock']
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.148 236262 DEBUG nova.network.neutron [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.150 236262 DEBUG nova.compute.manager [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.792 236262 INFO oslo.privsep.daemon [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Spawned new privsep daemon via rootwrap
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.688 239785 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.694 239785 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.697 239785 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.697 239785 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239785
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.795 239785 DEBUG oslo.privsep.daemon [-] privsep: reply[7f874b07-9630-4619-9c96-3c0c9dfe53b4]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.878 239785 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.888 239785 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.888 239785 DEBUG oslo.privsep.daemon [-] privsep: reply[2d545319-aa41-4a99-8c6f-3ba136c6907a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.889 239785 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.893 239785 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.893 239785 DEBUG oslo.privsep.daemon [-] privsep: reply[a8de0ad9-6ae7-42ec-84b8-e28476e8069b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5cc8582af5', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.895 239785 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.904 239785 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.905 239785 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebfd412-0add-49c4-92d5-befff719c701]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.906 239785 DEBUG oslo.privsep.daemon [-] privsep: reply[d2bc69f0-6f54-40c5-8f8c-5b4669dd4f07]: (4, 'ff47f49d-26ab-48e1-aa1a-aeb921932033') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.907 236262 DEBUG oslo_concurrency.processutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.917 236262 DEBUG oslo_concurrency.processutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CMD "nvme version" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.920 236262 DEBUG os_brick.initiator.connectors.lightos [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.920 236262 DEBUG os_brick.initiator.connectors.lightos [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.921 236262 DEBUG os_brick.initiator.connectors.lightos [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.921 236262 DEBUG os_brick.utils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] <== get_connector_properties: return (788ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5cc8582af5', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'ff47f49d-26ab-48e1-aa1a-aeb921932033', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 29 09:33:39 compute-0 nova_compute[236255]: 2026-01-29 09:33:39.922 236262 DEBUG nova.virt.block_device [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Updating existing volume attachment record: 41a3ddf8-fbcb-47b4-8e98-b1a1b50b688b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 29 09:33:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 29 09:33:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 29 09:33:40 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 29 09:33:40 compute-0 ceph-mon[75183]: pgmap v711: 193 pgs: 193 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 15 KiB/s wr, 147 op/s
Jan 29 09:33:40 compute-0 ceph-mon[75183]: osdmap e70: 3 total, 3 up, 3 in
Jan 29 09:33:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v714: 193 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 187 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.8 KiB/s wr, 63 op/s
Jan 29 09:33:41 compute-0 ceph-mon[75183]: osdmap e71: 3 total, 3 up, 3 in
Jan 29 09:33:41 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 09:33:41 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2978253327' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 09:33:41 compute-0 sshd-session[239794]: Connection closed by authenticating user root 204.48.25.113 port 38566 [preauth]
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.557 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.590 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.591 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.591 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.609 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.755 236262 DEBUG nova.compute.manager [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.758 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.759 236262 INFO nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Creating image(s)
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.759 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.760 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Ensure instance console log exists: /var/lib/nova/instances/91dbe8b1-3e4d-436e-966e-05af44b988c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.760 236262 DEBUG oslo_concurrency.lockutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.761 236262 DEBUG oslo_concurrency.lockutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.761 236262 DEBUG oslo_concurrency.lockutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.764 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d774a831-6191-4c45-a2c6-9de804ae8122', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd774a831-6191-4c45-a2c6-9de804ae8122', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '91dbe8b1-3e4d-436e-966e-05af44b988c4', 'attached_at': '', 'detached_at': '', 'volume_id': 'd774a831-6191-4c45-a2c6-9de804ae8122', 'serial': 'd774a831-6191-4c45-a2c6-9de804ae8122'}, 'attachment_id': '41a3ddf8-fbcb-47b4-8e98-b1a1b50b688b', 'device_type': 'disk', 'delete_on_termination': True, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.771 236262 WARNING nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.785 236262 DEBUG nova.virt.libvirt.host [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.786 236262 DEBUG nova.virt.libvirt.host [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.792 236262 DEBUG nova.virt.libvirt.host [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.793 236262 DEBUG nova.virt.libvirt.host [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.793 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.794 236262 DEBUG nova.virt.hardware [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T09:32:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ad386bd5-2fc0-4f1d-beae-1b6bc4422bba',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.794 236262 DEBUG nova.virt.hardware [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.795 236262 DEBUG nova.virt.hardware [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.795 236262 DEBUG nova.virt.hardware [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.795 236262 DEBUG nova.virt.hardware [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.795 236262 DEBUG nova.virt.hardware [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.796 236262 DEBUG nova.virt.hardware [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.796 236262 DEBUG nova.virt.hardware [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.796 236262 DEBUG nova.virt.hardware [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.797 236262 DEBUG nova.virt.hardware [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.797 236262 DEBUG nova.virt.hardware [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.821 236262 DEBUG nova.storage.rbd_utils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] rbd image 91dbe8b1-3e4d-436e-966e-05af44b988c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.829 236262 DEBUG nova.privsep.utils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 29 09:33:41 compute-0 nova_compute[236255]: 2026-01-29 09:33:41.830 236262 DEBUG oslo_concurrency.processutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:33:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 29 09:33:42 compute-0 ceph-mon[75183]: pgmap v714: 193 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 187 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.8 KiB/s wr, 63 op/s
Jan 29 09:33:42 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/2978253327' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 09:33:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 29 09:33:42 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 29 09:33:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 09:33:42 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1037822468' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 09:33:42 compute-0 nova_compute[236255]: 2026-01-29 09:33:42.367 236262 DEBUG oslo_concurrency.processutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:33:42 compute-0 nova_compute[236255]: 2026-01-29 09:33:42.368 236262 DEBUG oslo_concurrency.lockutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:33:42 compute-0 nova_compute[236255]: 2026-01-29 09:33:42.369 236262 DEBUG oslo_concurrency.lockutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:33:42 compute-0 nova_compute[236255]: 2026-01-29 09:33:42.369 236262 DEBUG oslo_concurrency.lockutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:33:42 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 29 09:33:42 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 29 09:33:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:33:42 compute-0 nova_compute[236255]: 2026-01-29 09:33:42.623 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:33:42 compute-0 nova_compute[236255]: 2026-01-29 09:33:42.624 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:33:42 compute-0 nova_compute[236255]: 2026-01-29 09:33:42.624 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:33:42 compute-0 nova_compute[236255]: 2026-01-29 09:33:42.698 236262 DEBUG nova.objects.instance [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91dbe8b1-3e4d-436e-966e-05af44b988c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 09:33:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v716: 193 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 187 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 6.5 KiB/s wr, 71 op/s
Jan 29 09:33:43 compute-0 ceph-mon[75183]: osdmap e72: 3 total, 3 up, 3 in
Jan 29 09:33:43 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1037822468' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 09:33:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 29 09:33:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 29 09:33:44 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 29 09:33:44 compute-0 ceph-mon[75183]: pgmap v716: 193 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 187 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 6.5 KiB/s wr, 71 op/s
Jan 29 09:33:44 compute-0 nova_compute[236255]: 2026-01-29 09:33:44.466 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:33:44 compute-0 nova_compute[236255]: 2026-01-29 09:33:44.466 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:33:44 compute-0 nova_compute[236255]: 2026-01-29 09:33:44.467 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:33:44 compute-0 nova_compute[236255]: 2026-01-29 09:33:44.467 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:33:44 compute-0 nova_compute[236255]: 2026-01-29 09:33:44.468 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:33:44 compute-0 nova_compute[236255]: 2026-01-29 09:33:44.487 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] End _get_guest_xml xml=<domain type="kvm">
Jan 29 09:33:44 compute-0 nova_compute[236255]:   <uuid>91dbe8b1-3e4d-436e-966e-05af44b988c4</uuid>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   <name>instance-00000001</name>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   <memory>131072</memory>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   <vcpu>1</vcpu>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   <metadata>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <nova:name>instance-depend-image</nova:name>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <nova:creationTime>2026-01-29 09:33:41</nova:creationTime>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <nova:flavor name="m1.nano">
Jan 29 09:33:44 compute-0 nova_compute[236255]:         <nova:memory>128</nova:memory>
Jan 29 09:33:44 compute-0 nova_compute[236255]:         <nova:disk>1</nova:disk>
Jan 29 09:33:44 compute-0 nova_compute[236255]:         <nova:swap>0</nova:swap>
Jan 29 09:33:44 compute-0 nova_compute[236255]:         <nova:ephemeral>0</nova:ephemeral>
Jan 29 09:33:44 compute-0 nova_compute[236255]:         <nova:vcpus>1</nova:vcpus>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       </nova:flavor>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <nova:owner>
Jan 29 09:33:44 compute-0 nova_compute[236255]:         <nova:user uuid="6ec470a589594a43a020f2568556969f">tempest-ImageDependencyTests-1234164529-project-member</nova:user>
Jan 29 09:33:44 compute-0 nova_compute[236255]:         <nova:project uuid="401d3ff8369e48ee9848bf6e778112a3">tempest-ImageDependencyTests-1234164529</nova:project>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       </nova:owner>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <nova:ports/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     </nova:instance>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   </metadata>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   <sysinfo type="smbios">
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <system>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <entry name="manufacturer">RDO</entry>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <entry name="product">OpenStack Compute</entry>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <entry name="serial">91dbe8b1-3e4d-436e-966e-05af44b988c4</entry>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <entry name="uuid">91dbe8b1-3e4d-436e-966e-05af44b988c4</entry>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <entry name="family">Virtual Machine</entry>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     </system>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   </sysinfo>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   <os>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <boot dev="hd"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <smbios mode="sysinfo"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   </os>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   <features>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <acpi/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <apic/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <vmcoreinfo/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   </features>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   <clock offset="utc">
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <timer name="pit" tickpolicy="delay"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <timer name="hpet" present="no"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   </clock>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   <cpu mode="host-model" match="exact">
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <topology sockets="1" cores="1" threads="1"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   </cpu>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   <devices>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <disk type="network" device="cdrom">
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <driver type="raw" cache="none"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <source protocol="rbd" name="vms/91dbe8b1-3e4d-436e-966e-05af44b988c4_disk.config">
Jan 29 09:33:44 compute-0 nova_compute[236255]:         <host name="192.168.122.100" port="6789"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       </source>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <auth username="openstack">
Jan 29 09:33:44 compute-0 nova_compute[236255]:         <secret type="ceph" uuid="3fdce3ca-565d-5459-88e8-1ffe58b48437"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       </auth>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <target dev="sda" bus="sata"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     </disk>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <disk type="network" device="disk">
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <source protocol="rbd" name="volumes/volume-d774a831-6191-4c45-a2c6-9de804ae8122">
Jan 29 09:33:44 compute-0 nova_compute[236255]:         <host name="192.168.122.100" port="6789"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       </source>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <auth username="openstack">
Jan 29 09:33:44 compute-0 nova_compute[236255]:         <secret type="ceph" uuid="3fdce3ca-565d-5459-88e8-1ffe58b48437"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       </auth>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <target dev="vda" bus="virtio"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <serial>d774a831-6191-4c45-a2c6-9de804ae8122</serial>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     </disk>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <serial type="pty">
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <log file="/var/lib/nova/instances/91dbe8b1-3e4d-436e-966e-05af44b988c4/console.log" append="off"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     </serial>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <video>
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <model type="virtio"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     </video>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <input type="tablet" bus="usb"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <rng model="virtio">
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <backend model="random">/dev/urandom</backend>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     </rng>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <controller type="usb" index="0"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     <memballoon model="virtio">
Jan 29 09:33:44 compute-0 nova_compute[236255]:       <stats period="10"/>
Jan 29 09:33:44 compute-0 nova_compute[236255]:     </memballoon>
Jan 29 09:33:44 compute-0 nova_compute[236255]:   </devices>
Jan 29 09:33:44 compute-0 nova_compute[236255]: </domain>
Jan 29 09:33:44 compute-0 nova_compute[236255]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 29 09:33:44 compute-0 nova_compute[236255]: 2026-01-29 09:33:44.536 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 09:33:44 compute-0 nova_compute[236255]: 2026-01-29 09:33:44.536 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 09:33:44 compute-0 nova_compute[236255]: 2026-01-29 09:33:44.537 236262 INFO nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Using config drive
Jan 29 09:33:44 compute-0 nova_compute[236255]: 2026-01-29 09:33:44.565 236262 DEBUG nova.storage.rbd_utils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] rbd image 91dbe8b1-3e4d-436e-966e-05af44b988c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 09:33:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v718: 193 pgs: 193 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 13 KiB/s wr, 198 op/s
Jan 29 09:33:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:33:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3927320113' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.000 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.076 236262 DEBUG nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.077 236262 DEBUG nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 29 09:33:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 29 09:33:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 29 09:33:45 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 29 09:33:45 compute-0 ceph-mon[75183]: osdmap e73: 3 total, 3 up, 3 in
Jan 29 09:33:45 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3927320113' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.254 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.255 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5225MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.255 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.255 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.322 236262 INFO nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Creating config drive at /var/lib/nova/instances/91dbe8b1-3e4d-436e-966e-05af44b988c4/disk.config
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.327 236262 DEBUG oslo_concurrency.processutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91dbe8b1-3e4d-436e-966e-05af44b988c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9tnz2ebw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.452 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Instance 91dbe8b1-3e4d-436e-966e-05af44b988c4 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.453 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.453 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.460 236262 DEBUG oslo_concurrency.processutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91dbe8b1-3e4d-436e-966e-05af44b988c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9tnz2ebw" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.481 236262 DEBUG nova.storage.rbd_utils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] rbd image 91dbe8b1-3e4d-436e-966e-05af44b988c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.484 236262 DEBUG oslo_concurrency.processutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91dbe8b1-3e4d-436e-966e-05af44b988c4/disk.config 91dbe8b1-3e4d-436e-966e-05af44b988c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.612 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Refreshing inventories for resource provider 2689825d-8fa0-473a-adf1-5005faba9bec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.742 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Updating ProviderTree inventory for provider 2689825d-8fa0-473a-adf1-5005faba9bec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.742 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Updating inventory in ProviderTree for provider 2689825d-8fa0-473a-adf1-5005faba9bec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.761 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Refreshing aggregate associations for resource provider 2689825d-8fa0-473a-adf1-5005faba9bec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.799 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Refreshing trait associations for resource provider 2689825d-8fa0-473a-adf1-5005faba9bec, traits: HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 29 09:33:45 compute-0 nova_compute[236255]: 2026-01-29 09:33:45.844 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:33:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 29 09:33:46 compute-0 ceph-mon[75183]: pgmap v718: 193 pgs: 193 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 13 KiB/s wr, 198 op/s
Jan 29 09:33:46 compute-0 ceph-mon[75183]: osdmap e74: 3 total, 3 up, 3 in
Jan 29 09:33:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 29 09:33:46 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 29 09:33:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:33:46 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3823695367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:33:46 compute-0 nova_compute[236255]: 2026-01-29 09:33:46.383 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:33:46 compute-0 nova_compute[236255]: 2026-01-29 09:33:46.390 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:33:46 compute-0 nova_compute[236255]: 2026-01-29 09:33:46.419 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:33:46 compute-0 nova_compute[236255]: 2026-01-29 09:33:46.453 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:33:46 compute-0 nova_compute[236255]: 2026-01-29 09:33:46.454 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:33:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v721: 193 pgs: 193 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 9.0 KiB/s wr, 168 op/s
Jan 29 09:33:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 29 09:33:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 29 09:33:47 compute-0 ceph-mon[75183]: osdmap e75: 3 total, 3 up, 3 in
Jan 29 09:33:47 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3823695367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:33:47 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 29 09:33:47 compute-0 nova_compute[236255]: 2026-01-29 09:33:47.307 236262 DEBUG oslo_concurrency.processutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91dbe8b1-3e4d-436e-966e-05af44b988c4/disk.config 91dbe8b1-3e4d-436e-966e-05af44b988c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.823s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:33:47 compute-0 nova_compute[236255]: 2026-01-29 09:33:47.308 236262 INFO nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Deleting local config drive /var/lib/nova/instances/91dbe8b1-3e4d-436e-966e-05af44b988c4/disk.config because it was imported into RBD.
Jan 29 09:33:47 compute-0 nova_compute[236255]: 2026-01-29 09:33:47.382 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:33:47 compute-0 nova_compute[236255]: 2026-01-29 09:33:47.383 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:33:47 compute-0 nova_compute[236255]: 2026-01-29 09:33:47.383 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:33:47 compute-0 nova_compute[236255]: 2026-01-29 09:33:47.384 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:33:47 compute-0 systemd-machined[204395]: New machine qemu-1-instance-00000001.
Jan 29 09:33:47 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 29 09:33:47 compute-0 nova_compute[236255]: 2026-01-29 09:33:47.416 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 29 09:33:47 compute-0 nova_compute[236255]: 2026-01-29 09:33:47.417 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:33:47 compute-0 nova_compute[236255]: 2026-01-29 09:33:47.418 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:33:47 compute-0 nova_compute[236255]: 2026-01-29 09:33:47.418 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:33:47 compute-0 nova_compute[236255]: 2026-01-29 09:33:47.418 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:33:47 compute-0 nova_compute[236255]: 2026-01-29 09:33:47.419 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:33:47 compute-0 nova_compute[236255]: 2026-01-29 09:33:47.419 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:33:47 compute-0 podman[239963]: 2026-01-29 09:33:47.489723688 +0000 UTC m=+0.107901491 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 09:33:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:33:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 29 09:33:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 29 09:33:47 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.014 236262 DEBUG nova.virt.driver [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Emitting event <LifecycleEvent: 1769679228.0132883, 91dbe8b1-3e4d-436e-966e-05af44b988c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.017 236262 INFO nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] VM Resumed (Lifecycle Event)
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.021 236262 DEBUG nova.compute.manager [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.022 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.027 236262 INFO nova.virt.libvirt.driver [-] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Instance spawned successfully.
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.027 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.057 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.057 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.058 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.058 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.058 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.059 236262 DEBUG nova.virt.libvirt.driver [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.062 236262 DEBUG nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.065 236262 DEBUG nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.096 236262 INFO nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.097 236262 DEBUG nova.virt.driver [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Emitting event <LifecycleEvent: 1769679228.01587, 91dbe8b1-3e4d-436e-966e-05af44b988c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.097 236262 INFO nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] VM Started (Lifecycle Event)
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.123 236262 DEBUG nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.127 236262 DEBUG nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.137 236262 INFO nova.compute.manager [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Took 6.38 seconds to spawn the instance on the hypervisor.
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.138 236262 DEBUG nova.compute.manager [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.146 236262 INFO nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 09:33:48 compute-0 ceph-mon[75183]: pgmap v721: 193 pgs: 193 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 9.0 KiB/s wr, 168 op/s
Jan 29 09:33:48 compute-0 ceph-mon[75183]: osdmap e76: 3 total, 3 up, 3 in
Jan 29 09:33:48 compute-0 ceph-mon[75183]: osdmap e77: 3 total, 3 up, 3 in
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.225 236262 INFO nova.compute.manager [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Took 10.47 seconds to build instance.
Jan 29 09:33:48 compute-0 nova_compute[236255]: 2026-01-29 09:33:48.251 236262 DEBUG oslo_concurrency.lockutils [None req-1796c8d8-d745-49ec-810f-7181efa4c13b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "91dbe8b1-3e4d-436e-966e-05af44b988c4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:33:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v724: 193 pgs: 193 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:50 compute-0 podman[240040]: 2026-01-29 09:33:50.125230811 +0000 UTC m=+0.064407160 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 29 09:33:50 compute-0 ceph-mon[75183]: pgmap v724: 193 pgs: 193 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:33:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v725: 193 pgs: 193 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 32 KiB/s wr, 118 op/s
Jan 29 09:33:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 29 09:33:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 29 09:33:51 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 29 09:33:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:33:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3192049803' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:33:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:33:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3192049803' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:33:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 29 09:33:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 29 09:33:52 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 29 09:33:52 compute-0 ceph-mon[75183]: pgmap v725: 193 pgs: 193 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 32 KiB/s wr, 118 op/s
Jan 29 09:33:52 compute-0 ceph-mon[75183]: osdmap e78: 3 total, 3 up, 3 in
Jan 29 09:33:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3192049803' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:33:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3192049803' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:33:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:33:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 29 09:33:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 29 09:33:52 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 29 09:33:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v729: 193 pgs: 193 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 35 KiB/s wr, 127 op/s
Jan 29 09:33:53 compute-0 ceph-mon[75183]: osdmap e79: 3 total, 3 up, 3 in
Jan 29 09:33:53 compute-0 ceph-mon[75183]: osdmap e80: 3 total, 3 up, 3 in
Jan 29 09:33:54 compute-0 ceph-mon[75183]: pgmap v729: 193 pgs: 193 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 35 KiB/s wr, 127 op/s
Jan 29 09:33:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 29 09:33:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 29 09:33:54 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 29 09:33:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v731: 193 pgs: 193 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 5.5 KiB/s wr, 148 op/s
Jan 29 09:33:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 29 09:33:55 compute-0 ceph-mon[75183]: osdmap e81: 3 total, 3 up, 3 in
Jan 29 09:33:55 compute-0 ceph-mon[75183]: pgmap v731: 193 pgs: 193 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 5.5 KiB/s wr, 148 op/s
Jan 29 09:33:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 29 09:33:55 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:33:56
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['images', 'backups', 'vms', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'cephfs.cephfs.data']
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:33:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 29 09:33:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 29 09:33:56 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 29 09:33:56 compute-0 ceph-mon[75183]: osdmap e82: 3 total, 3 up, 3 in
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:33:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v734: 193 pgs: 193 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 5.2 KiB/s wr, 141 op/s
Jan 29 09:33:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:33:57 compute-0 ceph-mon[75183]: osdmap e83: 3 total, 3 up, 3 in
Jan 29 09:33:57 compute-0 ceph-mon[75183]: pgmap v734: 193 pgs: 193 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 5.2 KiB/s wr, 141 op/s
Jan 29 09:33:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v735: 193 pgs: 193 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 3.7 KiB/s wr, 99 op/s
Jan 29 09:33:58 compute-0 sudo[240060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:33:58 compute-0 sudo[240060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:33:58 compute-0 sudo[240060]: pam_unix(sudo:session): session closed for user root
Jan 29 09:33:58 compute-0 sudo[240085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 29 09:33:58 compute-0 sudo[240085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:33:59 compute-0 nova_compute[236255]: 2026-01-29 09:33:59.131 236262 DEBUG oslo_concurrency.lockutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "b6002447-8ee9-4e0e-86cd-1e7121b5e4e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:33:59 compute-0 nova_compute[236255]: 2026-01-29 09:33:59.132 236262 DEBUG oslo_concurrency.lockutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "b6002447-8ee9-4e0e-86cd-1e7121b5e4e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:33:59 compute-0 nova_compute[236255]: 2026-01-29 09:33:59.183 236262 DEBUG nova.compute.manager [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 29 09:33:59 compute-0 sudo[240085]: pam_unix(sudo:session): session closed for user root
Jan 29 09:33:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:33:59 compute-0 nova_compute[236255]: 2026-01-29 09:33:59.277 236262 DEBUG oslo_concurrency.lockutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:33:59 compute-0 nova_compute[236255]: 2026-01-29 09:33:59.277 236262 DEBUG oslo_concurrency.lockutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:33:59 compute-0 nova_compute[236255]: 2026-01-29 09:33:59.284 236262 DEBUG nova.virt.hardware [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 29 09:33:59 compute-0 nova_compute[236255]: 2026-01-29 09:33:59.284 236262 INFO nova.compute.claims [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Claim successful on node compute-0.ctlplane.example.com
Jan 29 09:33:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:33:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:33:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:33:59 compute-0 sudo[240129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:33:59 compute-0 sudo[240129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:33:59 compute-0 sudo[240129]: pam_unix(sudo:session): session closed for user root
Jan 29 09:33:59 compute-0 nova_compute[236255]: 2026-01-29 09:33:59.406 236262 DEBUG oslo_concurrency.processutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:33:59 compute-0 sudo[240154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:33:59 compute-0 sudo[240154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:33:59 compute-0 sudo[240154]: pam_unix(sudo:session): session closed for user root
Jan 29 09:33:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:33:59 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:33:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:33:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:33:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:33:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:33:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:33:59 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:33:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:33:59 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:33:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:33:59 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:33:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:33:59 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/180603622' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:33:59 compute-0 sudo[240231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:33:59 compute-0 sudo[240231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:33:59 compute-0 sudo[240231]: pam_unix(sudo:session): session closed for user root
Jan 29 09:33:59 compute-0 nova_compute[236255]: 2026-01-29 09:33:59.974 236262 DEBUG oslo_concurrency.processutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:33:59 compute-0 nova_compute[236255]: 2026-01-29 09:33:59.980 236262 DEBUG nova.compute.provider_tree [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:33:59 compute-0 nova_compute[236255]: 2026-01-29 09:33:59.993 236262 DEBUG nova.scheduler.client.report [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:34:00 compute-0 sudo[240258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:34:00 compute-0 sudo[240258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.013 236262 DEBUG oslo_concurrency.lockutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.014 236262 DEBUG nova.compute.manager [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.058 236262 DEBUG nova.compute.manager [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.059 236262 DEBUG nova.network.neutron [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.084 236262 INFO nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.102 236262 DEBUG nova.compute.manager [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.222 236262 DEBUG nova.compute.manager [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.223 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.224 236262 INFO nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Creating image(s)
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.252 236262 DEBUG nova.storage.rbd_utils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] rbd image b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.280 236262 DEBUG nova.storage.rbd_utils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] rbd image b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 09:34:00 compute-0 podman[240295]: 2026-01-29 09:34:00.265557959 +0000 UTC m=+0.032155425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.396 236262 DEBUG nova.storage.rbd_utils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] rbd image b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.401 236262 DEBUG oslo_concurrency.lockutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "ced62c4f017c0fedacf05541dc787c0e4d8e024c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.402 236262 DEBUG oslo_concurrency.lockutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "ced62c4f017c0fedacf05541dc787c0e4d8e024c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:34:00 compute-0 podman[240295]: 2026-01-29 09:34:00.455305441 +0000 UTC m=+0.221902897 container create 38015a586fb9d236751b8fa92b9f24a4573a44d31244daa9cccfe15941a4f5af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_burnell, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 09:34:00 compute-0 ceph-mon[75183]: pgmap v735: 193 pgs: 193 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 3.7 KiB/s wr, 99 op/s
Jan 29 09:34:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:34:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:34:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:34:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:34:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:34:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:34:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:34:00 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:34:00 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/180603622' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:34:00 compute-0 systemd[1]: Started libpod-conmon-38015a586fb9d236751b8fa92b9f24a4573a44d31244daa9cccfe15941a4f5af.scope.
Jan 29 09:34:00 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:34:00 compute-0 podman[240295]: 2026-01-29 09:34:00.577953662 +0000 UTC m=+0.344551118 container init 38015a586fb9d236751b8fa92b9f24a4573a44d31244daa9cccfe15941a4f5af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:34:00 compute-0 podman[240295]: 2026-01-29 09:34:00.588990361 +0000 UTC m=+0.355587777 container start 38015a586fb9d236751b8fa92b9f24a4573a44d31244daa9cccfe15941a4f5af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_burnell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:34:00 compute-0 quirky_burnell[240365]: 167 167
Jan 29 09:34:00 compute-0 systemd[1]: libpod-38015a586fb9d236751b8fa92b9f24a4573a44d31244daa9cccfe15941a4f5af.scope: Deactivated successfully.
Jan 29 09:34:00 compute-0 podman[240295]: 2026-01-29 09:34:00.596508485 +0000 UTC m=+0.363105921 container attach 38015a586fb9d236751b8fa92b9f24a4573a44d31244daa9cccfe15941a4f5af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 09:34:00 compute-0 podman[240295]: 2026-01-29 09:34:00.59743041 +0000 UTC m=+0.364027826 container died 38015a586fb9d236751b8fa92b9f24a4573a44d31244daa9cccfe15941a4f5af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_burnell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:34:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a539088e7c4464d2d28f97cfeda84fb610fe3225e3e2670839914374cb1a84b-merged.mount: Deactivated successfully.
Jan 29 09:34:00 compute-0 ceph-osd[86001]: bluestore.MempoolThread fragmentation_score=0.000120 took=0.000025s
Jan 29 09:34:00 compute-0 ceph-osd[88193]: bluestore.MempoolThread fragmentation_score=0.000134 took=0.001228s
Jan 29 09:34:00 compute-0 ceph-osd[87035]: bluestore.MempoolThread fragmentation_score=0.000119 took=0.000017s
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.641 236262 DEBUG nova.network.neutron [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.641 236262 DEBUG nova.compute.manager [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 29 09:34:00 compute-0 podman[240295]: 2026-01-29 09:34:00.65487733 +0000 UTC m=+0.421474746 container remove 38015a586fb9d236751b8fa92b9f24a4573a44d31244daa9cccfe15941a4f5af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:34:00 compute-0 systemd[1]: libpod-conmon-38015a586fb9d236751b8fa92b9f24a4573a44d31244daa9cccfe15941a4f5af.scope: Deactivated successfully.
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.763 236262 DEBUG nova.virt.libvirt.imagebackend [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Image locations are: [{'url': 'rbd://3fdce3ca-565d-5459-88e8-1ffe58b48437/images/db2030d5-68ad-4c19-8554-acadfe9ba001/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://3fdce3ca-565d-5459-88e8-1ffe58b48437/images/db2030d5-68ad-4c19-8554-acadfe9ba001/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 29 09:34:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v736: 193 pgs: 193 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.2 KiB/s wr, 65 op/s
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.829 236262 DEBUG nova.virt.libvirt.imagebackend [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Selected location: {'url': 'rbd://3fdce3ca-565d-5459-88e8-1ffe58b48437/images/db2030d5-68ad-4c19-8554-acadfe9ba001/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.831 236262 DEBUG nova.storage.rbd_utils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] cloning images/db2030d5-68ad-4c19-8554-acadfe9ba001@snap to None/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 29 09:34:00 compute-0 podman[240391]: 2026-01-29 09:34:00.851106539 +0000 UTC m=+0.072401187 container create 5f6c8c5702fdd87f16b3ffe64e3a2d916c2236a435be907584ec767d7b0e1721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 09:34:00 compute-0 systemd[1]: Started libpod-conmon-5f6c8c5702fdd87f16b3ffe64e3a2d916c2236a435be907584ec767d7b0e1721.scope.
Jan 29 09:34:00 compute-0 podman[240391]: 2026-01-29 09:34:00.81872455 +0000 UTC m=+0.040019278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:34:00 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228399f75d2c42befb072e34e4a7d74d3c56644a34b59a3ef26c7a543e5625f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228399f75d2c42befb072e34e4a7d74d3c56644a34b59a3ef26c7a543e5625f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228399f75d2c42befb072e34e4a7d74d3c56644a34b59a3ef26c7a543e5625f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228399f75d2c42befb072e34e4a7d74d3c56644a34b59a3ef26c7a543e5625f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228399f75d2c42befb072e34e4a7d74d3c56644a34b59a3ef26c7a543e5625f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:34:00 compute-0 podman[240391]: 2026-01-29 09:34:00.938975745 +0000 UTC m=+0.160270413 container init 5f6c8c5702fdd87f16b3ffe64e3a2d916c2236a435be907584ec767d7b0e1721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hypatia, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 09:34:00 compute-0 podman[240391]: 2026-01-29 09:34:00.947949319 +0000 UTC m=+0.169243967 container start 5f6c8c5702fdd87f16b3ffe64e3a2d916c2236a435be907584ec767d7b0e1721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 09:34:00 compute-0 podman[240391]: 2026-01-29 09:34:00.953831808 +0000 UTC m=+0.175126486 container attach 5f6c8c5702fdd87f16b3ffe64e3a2d916c2236a435be907584ec767d7b0e1721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hypatia, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:34:00 compute-0 nova_compute[236255]: 2026-01-29 09:34:00.956 236262 DEBUG oslo_concurrency.lockutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "ced62c4f017c0fedacf05541dc787c0e4d8e024c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.076 236262 DEBUG nova.storage.rbd_utils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] resizing rbd image b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.131 236262 DEBUG nova.objects.instance [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lazy-loading 'migration_context' on Instance uuid b6002447-8ee9-4e0e-86cd-1e7121b5e4e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.150 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.150 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Ensure instance console log exists: /var/lib/nova/instances/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.150 236262 DEBUG oslo_concurrency.lockutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.151 236262 DEBUG oslo_concurrency.lockutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.151 236262 DEBUG oslo_concurrency.lockutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.152 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='1cfe8e6d383a00fa4d8c6e478bca15b4',container_format='bare',created_at=2026-01-29T09:33:56Z,direct_url=<?>,disk_format='raw',id=db2030d5-68ad-4c19-8554-acadfe9ba001,min_disk=0,min_ram=0,name='tempest-image-dependency-test-474489745',owner='401d3ff8369e48ee9848bf6e778112a3',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-01-29T09:33:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'image_id': 'db2030d5-68ad-4c19-8554-acadfe9ba001'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.156 236262 WARNING nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.162 236262 DEBUG nova.virt.libvirt.host [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.163 236262 DEBUG nova.virt.libvirt.host [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.166 236262 DEBUG nova.virt.libvirt.host [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.166 236262 DEBUG nova.virt.libvirt.host [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.166 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.166 236262 DEBUG nova.virt.hardware [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T09:32:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ad386bd5-2fc0-4f1d-beae-1b6bc4422bba',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='1cfe8e6d383a00fa4d8c6e478bca15b4',container_format='bare',created_at=2026-01-29T09:33:56Z,direct_url=<?>,disk_format='raw',id=db2030d5-68ad-4c19-8554-acadfe9ba001,min_disk=0,min_ram=0,name='tempest-image-dependency-test-474489745',owner='401d3ff8369e48ee9848bf6e778112a3',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-01-29T09:33:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.167 236262 DEBUG nova.virt.hardware [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.167 236262 DEBUG nova.virt.hardware [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.167 236262 DEBUG nova.virt.hardware [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.168 236262 DEBUG nova.virt.hardware [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.168 236262 DEBUG nova.virt.hardware [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.168 236262 DEBUG nova.virt.hardware [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.168 236262 DEBUG nova.virt.hardware [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.168 236262 DEBUG nova.virt.hardware [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.168 236262 DEBUG nova.virt.hardware [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.169 236262 DEBUG nova.virt.hardware [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.171 236262 DEBUG oslo_concurrency.processutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:34:01 compute-0 condescending_hypatia[240473]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:34:01 compute-0 condescending_hypatia[240473]: --> All data devices are unavailable
Jan 29 09:34:01 compute-0 systemd[1]: libpod-5f6c8c5702fdd87f16b3ffe64e3a2d916c2236a435be907584ec767d7b0e1721.scope: Deactivated successfully.
Jan 29 09:34:01 compute-0 podman[240588]: 2026-01-29 09:34:01.426194645 +0000 UTC m=+0.026908932 container died 5f6c8c5702fdd87f16b3ffe64e3a2d916c2236a435be907584ec767d7b0e1721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:34:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-228399f75d2c42befb072e34e4a7d74d3c56644a34b59a3ef26c7a543e5625f9-merged.mount: Deactivated successfully.
Jan 29 09:34:01 compute-0 ceph-mon[75183]: pgmap v736: 193 pgs: 193 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.2 KiB/s wr, 65 op/s
Jan 29 09:34:01 compute-0 podman[240588]: 2026-01-29 09:34:01.472506912 +0000 UTC m=+0.073221169 container remove 5f6c8c5702fdd87f16b3ffe64e3a2d916c2236a435be907584ec767d7b0e1721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hypatia, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 29 09:34:01 compute-0 systemd[1]: libpod-conmon-5f6c8c5702fdd87f16b3ffe64e3a2d916c2236a435be907584ec767d7b0e1721.scope: Deactivated successfully.
Jan 29 09:34:01 compute-0 sudo[240258]: pam_unix(sudo:session): session closed for user root
Jan 29 09:34:01 compute-0 sudo[240604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:34:01 compute-0 sudo[240604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:34:01 compute-0 sudo[240604]: pam_unix(sudo:session): session closed for user root
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:34:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:34:01 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1774130047' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.5029215657742855e-06 of space, bias 1.0, pg target 0.0007508764697322857 quantized to 32 (current 32)
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.7014585081282917e-07 of space, bias 1.0, pg target 0.00014104375524384875 quantized to 32 (current 32)
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006681132100790811 of space, bias 1.0, pg target 0.20043396302372432 quantized to 32 (current 32)
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3871033644857852e-06 of space, bias 4.0, pg target 0.0016645240373829421 quantized to 16 (current 32)
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:34:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.664 236262 DEBUG oslo_concurrency.processutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:34:01 compute-0 sudo[240629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:34:01 compute-0 sudo[240629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.697 236262 DEBUG nova.storage.rbd_utils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] rbd image b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 09:34:01 compute-0 nova_compute[236255]: 2026-01-29 09:34:01.702 236262 DEBUG oslo_concurrency.processutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:34:01 compute-0 podman[240706]: 2026-01-29 09:34:01.942933436 +0000 UTC m=+0.034197509 container create 29c6a4b79c6f6fff32d9fc7b093e8cf91a668ec333e44b5d63e01c74ae381205 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 09:34:01 compute-0 systemd[1]: Started libpod-conmon-29c6a4b79c6f6fff32d9fc7b093e8cf91a668ec333e44b5d63e01c74ae381205.scope.
Jan 29 09:34:01 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:34:01 compute-0 podman[240706]: 2026-01-29 09:34:01.995771331 +0000 UTC m=+0.087035384 container init 29c6a4b79c6f6fff32d9fc7b093e8cf91a668ec333e44b5d63e01c74ae381205 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hodgkin, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 09:34:02 compute-0 podman[240706]: 2026-01-29 09:34:02.001103626 +0000 UTC m=+0.092367689 container start 29c6a4b79c6f6fff32d9fc7b093e8cf91a668ec333e44b5d63e01c74ae381205 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hodgkin, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 09:34:02 compute-0 happy_hodgkin[240722]: 167 167
Jan 29 09:34:02 compute-0 systemd[1]: libpod-29c6a4b79c6f6fff32d9fc7b093e8cf91a668ec333e44b5d63e01c74ae381205.scope: Deactivated successfully.
Jan 29 09:34:02 compute-0 podman[240706]: 2026-01-29 09:34:02.00678424 +0000 UTC m=+0.098048313 container attach 29c6a4b79c6f6fff32d9fc7b093e8cf91a668ec333e44b5d63e01c74ae381205 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hodgkin, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 09:34:02 compute-0 podman[240706]: 2026-01-29 09:34:02.007104169 +0000 UTC m=+0.098368222 container died 29c6a4b79c6f6fff32d9fc7b093e8cf91a668ec333e44b5d63e01c74ae381205 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hodgkin, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:34:02 compute-0 podman[240706]: 2026-01-29 09:34:01.927430125 +0000 UTC m=+0.018694178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-71764b6c54d813c31f539d8538e143cc93e4252373b80068f63614e2539588da-merged.mount: Deactivated successfully.
Jan 29 09:34:02 compute-0 podman[240706]: 2026-01-29 09:34:02.049287784 +0000 UTC m=+0.140551857 container remove 29c6a4b79c6f6fff32d9fc7b093e8cf91a668ec333e44b5d63e01c74ae381205 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:34:02 compute-0 systemd[1]: libpod-conmon-29c6a4b79c6f6fff32d9fc7b093e8cf91a668ec333e44b5d63e01c74ae381205.scope: Deactivated successfully.
Jan 29 09:34:02 compute-0 podman[240746]: 2026-01-29 09:34:02.18318278 +0000 UTC m=+0.037949861 container create 1a99244313a6291b3a997e4f19cdcf57d86820bf965414d81e9040c406169e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 29 09:34:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 09:34:02 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713305800' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 09:34:02 compute-0 systemd[1]: Started libpod-conmon-1a99244313a6291b3a997e4f19cdcf57d86820bf965414d81e9040c406169e56.scope.
Jan 29 09:34:02 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0522cc326d0038656994f960e6af4a18d4db5fda27a23c2435978ae814bc2e95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0522cc326d0038656994f960e6af4a18d4db5fda27a23c2435978ae814bc2e95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0522cc326d0038656994f960e6af4a18d4db5fda27a23c2435978ae814bc2e95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0522cc326d0038656994f960e6af4a18d4db5fda27a23c2435978ae814bc2e95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.239 236262 DEBUG oslo_concurrency.processutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.242 236262 DEBUG nova.objects.instance [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6002447-8ee9-4e0e-86cd-1e7121b5e4e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 09:34:02 compute-0 podman[240746]: 2026-01-29 09:34:02.164510673 +0000 UTC m=+0.019277774 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.268 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] End _get_guest_xml xml=<domain type="kvm">
Jan 29 09:34:02 compute-0 nova_compute[236255]:   <uuid>b6002447-8ee9-4e0e-86cd-1e7121b5e4e0</uuid>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   <name>instance-00000002</name>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   <memory>131072</memory>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   <vcpu>1</vcpu>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   <metadata>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <nova:name>instance-depend-image</nova:name>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <nova:creationTime>2026-01-29 09:34:01</nova:creationTime>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <nova:flavor name="m1.nano">
Jan 29 09:34:02 compute-0 nova_compute[236255]:         <nova:memory>128</nova:memory>
Jan 29 09:34:02 compute-0 nova_compute[236255]:         <nova:disk>1</nova:disk>
Jan 29 09:34:02 compute-0 nova_compute[236255]:         <nova:swap>0</nova:swap>
Jan 29 09:34:02 compute-0 nova_compute[236255]:         <nova:ephemeral>0</nova:ephemeral>
Jan 29 09:34:02 compute-0 nova_compute[236255]:         <nova:vcpus>1</nova:vcpus>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       </nova:flavor>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <nova:owner>
Jan 29 09:34:02 compute-0 nova_compute[236255]:         <nova:user uuid="6ec470a589594a43a020f2568556969f">tempest-ImageDependencyTests-1234164529-project-member</nova:user>
Jan 29 09:34:02 compute-0 nova_compute[236255]:         <nova:project uuid="401d3ff8369e48ee9848bf6e778112a3">tempest-ImageDependencyTests-1234164529</nova:project>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       </nova:owner>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <nova:root type="image" uuid="db2030d5-68ad-4c19-8554-acadfe9ba001"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <nova:ports/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     </nova:instance>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   </metadata>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   <sysinfo type="smbios">
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <system>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <entry name="manufacturer">RDO</entry>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <entry name="product">OpenStack Compute</entry>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <entry name="serial">b6002447-8ee9-4e0e-86cd-1e7121b5e4e0</entry>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <entry name="uuid">b6002447-8ee9-4e0e-86cd-1e7121b5e4e0</entry>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <entry name="family">Virtual Machine</entry>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     </system>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   </sysinfo>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   <os>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <boot dev="hd"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <smbios mode="sysinfo"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   </os>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   <features>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <acpi/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <apic/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <vmcoreinfo/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   </features>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   <clock offset="utc">
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <timer name="pit" tickpolicy="delay"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <timer name="hpet" present="no"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   </clock>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   <cpu mode="host-model" match="exact">
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <topology sockets="1" cores="1" threads="1"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   </cpu>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   <devices>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <disk type="network" device="disk">
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <driver type="raw" cache="none"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <source protocol="rbd" name="vms/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk">
Jan 29 09:34:02 compute-0 nova_compute[236255]:         <host name="192.168.122.100" port="6789"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       </source>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <auth username="openstack">
Jan 29 09:34:02 compute-0 nova_compute[236255]:         <secret type="ceph" uuid="3fdce3ca-565d-5459-88e8-1ffe58b48437"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       </auth>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <target dev="vda" bus="virtio"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     </disk>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <disk type="network" device="cdrom">
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <driver type="raw" cache="none"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <source protocol="rbd" name="vms/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk.config">
Jan 29 09:34:02 compute-0 nova_compute[236255]:         <host name="192.168.122.100" port="6789"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       </source>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <auth username="openstack">
Jan 29 09:34:02 compute-0 nova_compute[236255]:         <secret type="ceph" uuid="3fdce3ca-565d-5459-88e8-1ffe58b48437"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       </auth>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <target dev="sda" bus="sata"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     </disk>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <serial type="pty">
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <log file="/var/lib/nova/instances/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0/console.log" append="off"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     </serial>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <video>
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <model type="virtio"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     </video>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <input type="tablet" bus="usb"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <rng model="virtio">
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <backend model="random">/dev/urandom</backend>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     </rng>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="pci" model="pcie-root-port"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <controller type="usb" index="0"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     <memballoon model="virtio">
Jan 29 09:34:02 compute-0 nova_compute[236255]:       <stats period="10"/>
Jan 29 09:34:02 compute-0 nova_compute[236255]:     </memballoon>
Jan 29 09:34:02 compute-0 nova_compute[236255]:   </devices>
Jan 29 09:34:02 compute-0 nova_compute[236255]: </domain>
Jan 29 09:34:02 compute-0 nova_compute[236255]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 29 09:34:02 compute-0 podman[240746]: 2026-01-29 09:34:02.287479022 +0000 UTC m=+0.142246163 container init 1a99244313a6291b3a997e4f19cdcf57d86820bf965414d81e9040c406169e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:34:02 compute-0 podman[240746]: 2026-01-29 09:34:02.292585621 +0000 UTC m=+0.147352722 container start 1a99244313a6291b3a997e4f19cdcf57d86820bf965414d81e9040c406169e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.329 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.329 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.330 236262 INFO nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Using config drive
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.353 236262 DEBUG nova.storage.rbd_utils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] rbd image b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 09:34:02 compute-0 podman[240746]: 2026-01-29 09:34:02.370496896 +0000 UTC m=+0.225263977 container attach 1a99244313a6291b3a997e4f19cdcf57d86820bf965414d81e9040c406169e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 09:34:02 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1774130047' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 09:34:02 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/713305800' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.510 236262 INFO nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Creating config drive at /var/lib/nova/instances/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0/disk.config
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.513 236262 DEBUG oslo_concurrency.processutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmputioal00 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:34:02 compute-0 zen_goodall[240764]: {
Jan 29 09:34:02 compute-0 zen_goodall[240764]:     "0": [
Jan 29 09:34:02 compute-0 zen_goodall[240764]:         {
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "devices": [
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "/dev/loop3"
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             ],
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_name": "ceph_lv0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_size": "21470642176",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "name": "ceph_lv0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "tags": {
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.cluster_name": "ceph",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.crush_device_class": "",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.encrypted": "0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.objectstore": "bluestore",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.osd_id": "0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.type": "block",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.vdo": "0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.with_tpm": "0"
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             },
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "type": "block",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "vg_name": "ceph_vg0"
Jan 29 09:34:02 compute-0 zen_goodall[240764]:         }
Jan 29 09:34:02 compute-0 zen_goodall[240764]:     ],
Jan 29 09:34:02 compute-0 zen_goodall[240764]:     "1": [
Jan 29 09:34:02 compute-0 zen_goodall[240764]:         {
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "devices": [
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "/dev/loop4"
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             ],
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_name": "ceph_lv1",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_size": "21470642176",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "name": "ceph_lv1",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "tags": {
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.cluster_name": "ceph",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.crush_device_class": "",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.encrypted": "0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.objectstore": "bluestore",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.osd_id": "1",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.type": "block",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.vdo": "0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.with_tpm": "0"
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             },
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "type": "block",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "vg_name": "ceph_vg1"
Jan 29 09:34:02 compute-0 zen_goodall[240764]:         }
Jan 29 09:34:02 compute-0 zen_goodall[240764]:     ],
Jan 29 09:34:02 compute-0 zen_goodall[240764]:     "2": [
Jan 29 09:34:02 compute-0 zen_goodall[240764]:         {
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "devices": [
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "/dev/loop5"
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             ],
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_name": "ceph_lv2",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_size": "21470642176",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "name": "ceph_lv2",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "tags": {
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.cluster_name": "ceph",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.crush_device_class": "",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.encrypted": "0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.objectstore": "bluestore",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.osd_id": "2",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.type": "block",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.vdo": "0",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:                 "ceph.with_tpm": "0"
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             },
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "type": "block",
Jan 29 09:34:02 compute-0 zen_goodall[240764]:             "vg_name": "ceph_vg2"
Jan 29 09:34:02 compute-0 zen_goodall[240764]:         }
Jan 29 09:34:02 compute-0 zen_goodall[240764]:     ]
Jan 29 09:34:02 compute-0 zen_goodall[240764]: }
Jan 29 09:34:02 compute-0 systemd[1]: libpod-1a99244313a6291b3a997e4f19cdcf57d86820bf965414d81e9040c406169e56.scope: Deactivated successfully.
Jan 29 09:34:02 compute-0 podman[240746]: 2026-01-29 09:34:02.575831852 +0000 UTC m=+0.430598923 container died 1a99244313a6291b3a997e4f19cdcf57d86820bf965414d81e9040c406169e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 09:34:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:34:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 29 09:34:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 29 09:34:02 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 29 09:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0522cc326d0038656994f960e6af4a18d4db5fda27a23c2435978ae814bc2e95-merged.mount: Deactivated successfully.
Jan 29 09:34:02 compute-0 podman[240746]: 2026-01-29 09:34:02.632694206 +0000 UTC m=+0.487461297 container remove 1a99244313a6291b3a997e4f19cdcf57d86820bf965414d81e9040c406169e56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_goodall, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.637 236262 DEBUG oslo_concurrency.processutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmputioal00" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:34:02 compute-0 systemd[1]: libpod-conmon-1a99244313a6291b3a997e4f19cdcf57d86820bf965414d81e9040c406169e56.scope: Deactivated successfully.
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.669 236262 DEBUG nova.storage.rbd_utils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] rbd image b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.672 236262 DEBUG oslo_concurrency.processutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0/disk.config b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:34:02 compute-0 sudo[240629]: pam_unix(sudo:session): session closed for user root
Jan 29 09:34:02 compute-0 sudo[240829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:34:02 compute-0 sudo[240829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:34:02 compute-0 sudo[240829]: pam_unix(sudo:session): session closed for user root
Jan 29 09:34:02 compute-0 sudo[240870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.786 236262 DEBUG oslo_concurrency.processutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0/disk.config b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:34:02 compute-0 nova_compute[236255]: 2026-01-29 09:34:02.787 236262 INFO nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Deleting local config drive /var/lib/nova/instances/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0/disk.config because it was imported into RBD.
Jan 29 09:34:02 compute-0 sudo[240870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:34:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v738: 193 pgs: 193 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 KiB/s wr, 56 op/s
Jan 29 09:34:02 compute-0 systemd-machined[204395]: New machine qemu-2-instance-00000002.
Jan 29 09:34:02 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 29 09:34:03 compute-0 podman[240925]: 2026-01-29 09:34:03.075680144 +0000 UTC m=+0.052448735 container create b3f13e76d76cefc879f86f74a6527f4d396626766682612c2868ecb1b6f4ef05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_almeida, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:34:03 compute-0 systemd[1]: Started libpod-conmon-b3f13e76d76cefc879f86f74a6527f4d396626766682612c2868ecb1b6f4ef05.scope.
Jan 29 09:34:03 compute-0 podman[240925]: 2026-01-29 09:34:03.052283959 +0000 UTC m=+0.029052590 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:34:03 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:34:03 compute-0 podman[240925]: 2026-01-29 09:34:03.186473153 +0000 UTC m=+0.163241754 container init b3f13e76d76cefc879f86f74a6527f4d396626766682612c2868ecb1b6f4ef05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:34:03 compute-0 podman[240925]: 2026-01-29 09:34:03.193853603 +0000 UTC m=+0.170622204 container start b3f13e76d76cefc879f86f74a6527f4d396626766682612c2868ecb1b6f4ef05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Jan 29 09:34:03 compute-0 podman[240925]: 2026-01-29 09:34:03.198710755 +0000 UTC m=+0.175479386 container attach b3f13e76d76cefc879f86f74a6527f4d396626766682612c2868ecb1b6f4ef05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_almeida, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 29 09:34:03 compute-0 brave_almeida[240942]: 167 167
Jan 29 09:34:03 compute-0 systemd[1]: libpod-b3f13e76d76cefc879f86f74a6527f4d396626766682612c2868ecb1b6f4ef05.scope: Deactivated successfully.
Jan 29 09:34:03 compute-0 conmon[240942]: conmon b3f13e76d76cefc879f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3f13e76d76cefc879f86f74a6527f4d396626766682612c2868ecb1b6f4ef05.scope/container/memory.events
Jan 29 09:34:03 compute-0 podman[240925]: 2026-01-29 09:34:03.202879988 +0000 UTC m=+0.179648579 container died b3f13e76d76cefc879f86f74a6527f4d396626766682612c2868ecb1b6f4ef05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_almeida, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-90170e6ee3316aef32553cf6a1080aef905a84e8676040e50ca8cb1438edc3a0-merged.mount: Deactivated successfully.
Jan 29 09:34:03 compute-0 podman[240925]: 2026-01-29 09:34:03.247660854 +0000 UTC m=+0.224429445 container remove b3f13e76d76cefc879f86f74a6527f4d396626766682612c2868ecb1b6f4ef05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_almeida, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 09:34:03 compute-0 systemd[1]: libpod-conmon-b3f13e76d76cefc879f86f74a6527f4d396626766682612c2868ecb1b6f4ef05.scope: Deactivated successfully.
Jan 29 09:34:03 compute-0 podman[240965]: 2026-01-29 09:34:03.412790658 +0000 UTC m=+0.044758976 container create 5a4250a927f375609cfe65096ba967038269829de8f01b6c9a863546ca7b5fd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:34:03 compute-0 systemd[1]: Started libpod-conmon-5a4250a927f375609cfe65096ba967038269829de8f01b6c9a863546ca7b5fd5.scope.
Jan 29 09:34:03 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:34:03 compute-0 podman[240965]: 2026-01-29 09:34:03.390853342 +0000 UTC m=+0.022821700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911e15208672c476f05ff35508047696a6a3d77329e93f1c3d79e5503b06b48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911e15208672c476f05ff35508047696a6a3d77329e93f1c3d79e5503b06b48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911e15208672c476f05ff35508047696a6a3d77329e93f1c3d79e5503b06b48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911e15208672c476f05ff35508047696a6a3d77329e93f1c3d79e5503b06b48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:34:03 compute-0 podman[240965]: 2026-01-29 09:34:03.50752107 +0000 UTC m=+0.139489438 container init 5a4250a927f375609cfe65096ba967038269829de8f01b6c9a863546ca7b5fd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 09:34:03 compute-0 podman[240965]: 2026-01-29 09:34:03.514464649 +0000 UTC m=+0.146432967 container start 5a4250a927f375609cfe65096ba967038269829de8f01b6c9a863546ca7b5fd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 09:34:03 compute-0 podman[240965]: 2026-01-29 09:34:03.518437417 +0000 UTC m=+0.150405855 container attach 5a4250a927f375609cfe65096ba967038269829de8f01b6c9a863546ca7b5fd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 09:34:03 compute-0 ceph-mon[75183]: osdmap e84: 3 total, 3 up, 3 in
Jan 29 09:34:03 compute-0 ceph-mon[75183]: pgmap v738: 193 pgs: 193 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 KiB/s wr, 56 op/s
Jan 29 09:34:04 compute-0 lvm[241069]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:34:04 compute-0 lvm[241070]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:34:04 compute-0 lvm[241068]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:34:04 compute-0 lvm[241068]: VG ceph_vg0 finished
Jan 29 09:34:04 compute-0 lvm[241069]: VG ceph_vg2 finished
Jan 29 09:34:04 compute-0 lvm[241070]: VG ceph_vg1 finished
Jan 29 09:34:04 compute-0 lucid_dirac[240982]: {}
Jan 29 09:34:04 compute-0 systemd[1]: libpod-5a4250a927f375609cfe65096ba967038269829de8f01b6c9a863546ca7b5fd5.scope: Deactivated successfully.
Jan 29 09:34:04 compute-0 systemd[1]: libpod-5a4250a927f375609cfe65096ba967038269829de8f01b6c9a863546ca7b5fd5.scope: Consumed 1.171s CPU time.
Jan 29 09:34:04 compute-0 podman[240965]: 2026-01-29 09:34:04.292324551 +0000 UTC m=+0.924292899 container died 5a4250a927f375609cfe65096ba967038269829de8f01b6c9a863546ca7b5fd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 09:34:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b911e15208672c476f05ff35508047696a6a3d77329e93f1c3d79e5503b06b48-merged.mount: Deactivated successfully.
Jan 29 09:34:04 compute-0 podman[240965]: 2026-01-29 09:34:04.337756475 +0000 UTC m=+0.969724803 container remove 5a4250a927f375609cfe65096ba967038269829de8f01b6c9a863546ca7b5fd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.336 236262 DEBUG nova.virt.driver [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Emitting event <LifecycleEvent: 1769679244.3339863, b6002447-8ee9-4e0e-86cd-1e7121b5e4e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.340 236262 INFO nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] VM Resumed (Lifecycle Event)
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.343 236262 DEBUG nova.compute.manager [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 29 09:34:04 compute-0 systemd[1]: libpod-conmon-5a4250a927f375609cfe65096ba967038269829de8f01b6c9a863546ca7b5fd5.scope: Deactivated successfully.
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.343 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.347 236262 INFO nova.virt.libvirt.driver [-] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Instance spawned successfully.
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.347 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.372 236262 DEBUG nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.376 236262 DEBUG nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 09:34:04 compute-0 sudo[240870]: pam_unix(sudo:session): session closed for user root
Jan 29 09:34:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:34:04 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:34:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:34:04 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.414 236262 INFO nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.415 236262 DEBUG nova.virt.driver [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] Emitting event <LifecycleEvent: 1769679244.3361733, b6002447-8ee9-4e0e-86cd-1e7121b5e4e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.415 236262 INFO nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] VM Started (Lifecycle Event)
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.423 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.424 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.425 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.425 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.426 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.426 236262 DEBUG nova.virt.libvirt.driver [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.433 236262 DEBUG nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.436 236262 DEBUG nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.453 236262 INFO nova.compute.manager [None req-3806e64e-5dec-43c4-b32a-d848c489a250 - - - - - -] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 09:34:04 compute-0 sudo[241120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:34:04 compute-0 sudo[241120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:34:04 compute-0 sudo[241120]: pam_unix(sudo:session): session closed for user root
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.480 236262 INFO nova.compute.manager [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Took 4.26 seconds to spawn the instance on the hypervisor.
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.481 236262 DEBUG nova.compute.manager [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.529 236262 INFO nova.compute.manager [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Took 5.29 seconds to build instance.
Jan 29 09:34:04 compute-0 nova_compute[236255]: 2026-01-29 09:34:04.545 236262 DEBUG oslo_concurrency.lockutils [None req-d2b13389-4b38-4e9f-812d-2f4688c5e39c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "b6002447-8ee9-4e0e-86cd-1e7121b5e4e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:34:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v739: 193 pgs: 193 active+clean; 42 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 23 KiB/s wr, 114 op/s
Jan 29 09:34:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:34:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:34:06 compute-0 ceph-mon[75183]: pgmap v739: 193 pgs: 193 active+clean; 42 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 23 KiB/s wr, 114 op/s
Jan 29 09:34:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v740: 193 pgs: 193 active+clean; 42 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 19 KiB/s wr, 93 op/s
Jan 29 09:34:07 compute-0 nova_compute[236255]: 2026-01-29 09:34:07.041 236262 DEBUG nova.compute.manager [None req-476973cb-5f50-4240-b1a8-d1d816e79e48 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 09:34:07 compute-0 nova_compute[236255]: 2026-01-29 09:34:07.091 236262 INFO nova.compute.manager [None req-476973cb-5f50-4240-b1a8-d1d816e79e48 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] instance snapshotting
Jan 29 09:34:07 compute-0 nova_compute[236255]: 2026-01-29 09:34:07.327 236262 INFO nova.virt.libvirt.driver [None req-476973cb-5f50-4240-b1a8-d1d816e79e48 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Beginning live snapshot process
Jan 29 09:34:07 compute-0 nova_compute[236255]: 2026-01-29 09:34:07.468 236262 DEBUG nova.storage.rbd_utils [None req-476973cb-5f50-4240-b1a8-d1d816e79e48 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] creating snapshot(337cbeefb3664cd6808daa361ce0d15a) on rbd image(b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 29 09:34:07 compute-0 ceph-mon[75183]: pgmap v740: 193 pgs: 193 active+clean; 42 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 19 KiB/s wr, 93 op/s
Jan 29 09:34:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:34:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 29 09:34:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 29 09:34:08 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 29 09:34:08 compute-0 nova_compute[236255]: 2026-01-29 09:34:08.526 236262 DEBUG nova.storage.rbd_utils [None req-476973cb-5f50-4240-b1a8-d1d816e79e48 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] cloning vms/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk@337cbeefb3664cd6808daa361ce0d15a to images/3838ec20-fbf9-4309-bb3b-dc217561d88f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 29 09:34:08 compute-0 nova_compute[236255]: 2026-01-29 09:34:08.639 236262 DEBUG nova.storage.rbd_utils [None req-476973cb-5f50-4240-b1a8-d1d816e79e48 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] flattening images/3838ec20-fbf9-4309-bb3b-dc217561d88f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 29 09:34:08 compute-0 nova_compute[236255]: 2026-01-29 09:34:08.777 236262 DEBUG nova.storage.rbd_utils [None req-476973cb-5f50-4240-b1a8-d1d816e79e48 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] removing snapshot(337cbeefb3664cd6808daa361ce0d15a) on rbd image(b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 29 09:34:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v742: 193 pgs: 193 active+clean; 42 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 20 KiB/s wr, 65 op/s
Jan 29 09:34:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:34:09.035 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:34:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:34:09.036 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:34:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:34:09.036 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:34:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 29 09:34:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 29 09:34:09 compute-0 ceph-mon[75183]: osdmap e85: 3 total, 3 up, 3 in
Jan 29 09:34:09 compute-0 ceph-mon[75183]: pgmap v742: 193 pgs: 193 active+clean; 42 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 20 KiB/s wr, 65 op/s
Jan 29 09:34:09 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 29 09:34:09 compute-0 nova_compute[236255]: 2026-01-29 09:34:09.544 236262 DEBUG nova.storage.rbd_utils [None req-476973cb-5f50-4240-b1a8-d1d816e79e48 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] creating snapshot(snap) on rbd image(3838ec20-fbf9-4309-bb3b-dc217561d88f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 29 09:34:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 29 09:34:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 29 09:34:10 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 29 09:34:10 compute-0 ceph-mon[75183]: osdmap e86: 3 total, 3 up, 3 in
Jan 29 09:34:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v745: 193 pgs: 193 active+clean; 42 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 4.0 KiB/s wr, 101 op/s
Jan 29 09:34:11 compute-0 ceph-mon[75183]: osdmap e87: 3 total, 3 up, 3 in
Jan 29 09:34:11 compute-0 ceph-mon[75183]: pgmap v745: 193 pgs: 193 active+clean; 42 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 4.0 KiB/s wr, 101 op/s
Jan 29 09:34:11 compute-0 nova_compute[236255]: 2026-01-29 09:34:11.748 236262 INFO nova.virt.libvirt.driver [None req-476973cb-5f50-4240-b1a8-d1d816e79e48 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Snapshot image upload complete
Jan 29 09:34:11 compute-0 nova_compute[236255]: 2026-01-29 09:34:11.749 236262 INFO nova.compute.manager [None req-476973cb-5f50-4240-b1a8-d1d816e79e48 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Took 4.65 seconds to snapshot the instance on the hypervisor.
Jan 29 09:34:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:34:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v746: 193 pgs: 193 active+clean; 42 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 4.0 KiB/s wr, 101 op/s
Jan 29 09:34:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 29 09:34:13 compute-0 ceph-mon[75183]: pgmap v746: 193 pgs: 193 active+clean; 42 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 4.0 KiB/s wr, 101 op/s
Jan 29 09:34:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 29 09:34:13 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 29 09:34:14 compute-0 nova_compute[236255]: 2026-01-29 09:34:14.459 236262 DEBUG oslo_concurrency.lockutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "b6002447-8ee9-4e0e-86cd-1e7121b5e4e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:34:14 compute-0 nova_compute[236255]: 2026-01-29 09:34:14.460 236262 DEBUG oslo_concurrency.lockutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "b6002447-8ee9-4e0e-86cd-1e7121b5e4e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:34:14 compute-0 nova_compute[236255]: 2026-01-29 09:34:14.460 236262 DEBUG oslo_concurrency.lockutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "b6002447-8ee9-4e0e-86cd-1e7121b5e4e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:34:14 compute-0 nova_compute[236255]: 2026-01-29 09:34:14.461 236262 DEBUG oslo_concurrency.lockutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "b6002447-8ee9-4e0e-86cd-1e7121b5e4e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:34:14 compute-0 nova_compute[236255]: 2026-01-29 09:34:14.461 236262 DEBUG oslo_concurrency.lockutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "b6002447-8ee9-4e0e-86cd-1e7121b5e4e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:34:14 compute-0 nova_compute[236255]: 2026-01-29 09:34:14.462 236262 INFO nova.compute.manager [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Terminating instance
Jan 29 09:34:14 compute-0 nova_compute[236255]: 2026-01-29 09:34:14.463 236262 DEBUG oslo_concurrency.lockutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "refresh_cache-b6002447-8ee9-4e0e-86cd-1e7121b5e4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 09:34:14 compute-0 nova_compute[236255]: 2026-01-29 09:34:14.463 236262 DEBUG oslo_concurrency.lockutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquired lock "refresh_cache-b6002447-8ee9-4e0e-86cd-1e7121b5e4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 09:34:14 compute-0 nova_compute[236255]: 2026-01-29 09:34:14.463 236262 DEBUG nova.network.neutron [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 29 09:34:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v748: 193 pgs: 193 active+clean; 42 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 8.3 KiB/s wr, 216 op/s
Jan 29 09:34:14 compute-0 ceph-mon[75183]: osdmap e88: 3 total, 3 up, 3 in
Jan 29 09:34:15 compute-0 nova_compute[236255]: 2026-01-29 09:34:15.117 236262 DEBUG nova.network.neutron [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 29 09:34:15 compute-0 nova_compute[236255]: 2026-01-29 09:34:15.356 236262 DEBUG nova.network.neutron [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 29 09:34:15 compute-0 nova_compute[236255]: 2026-01-29 09:34:15.371 236262 DEBUG oslo_concurrency.lockutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Releasing lock "refresh_cache-b6002447-8ee9-4e0e-86cd-1e7121b5e4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 29 09:34:15 compute-0 nova_compute[236255]: 2026-01-29 09:34:15.371 236262 DEBUG nova.compute.manager [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 29 09:34:15 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 29 09:34:15 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 1.909s CPU time.
Jan 29 09:34:15 compute-0 systemd-machined[204395]: Machine qemu-2-instance-00000002 terminated.
Jan 29 09:34:15 compute-0 nova_compute[236255]: 2026-01-29 09:34:15.599 236262 INFO nova.virt.libvirt.driver [-] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Instance destroyed successfully.
Jan 29 09:34:15 compute-0 nova_compute[236255]: 2026-01-29 09:34:15.601 236262 DEBUG nova.objects.instance [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lazy-loading 'resources' on Instance uuid b6002447-8ee9-4e0e-86cd-1e7121b5e4e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 09:34:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 29 09:34:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 29 09:34:15 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 29 09:34:15 compute-0 ceph-mon[75183]: pgmap v748: 193 pgs: 193 active+clean; 42 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 8.3 KiB/s wr, 216 op/s
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.107 236262 INFO nova.virt.libvirt.driver [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Deleting instance files /var/lib/nova/instances/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_del
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.109 236262 INFO nova.virt.libvirt.driver [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Deletion of /var/lib/nova/instances/b6002447-8ee9-4e0e-86cd-1e7121b5e4e0_del complete
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.159 236262 DEBUG nova.virt.libvirt.host [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.160 236262 INFO nova.virt.libvirt.host [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] UEFI support detected
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.162 236262 INFO nova.compute.manager [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Took 0.79 seconds to destroy the instance on the hypervisor.
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.162 236262 DEBUG oslo.service.loopingcall [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.162 236262 DEBUG nova.compute.manager [-] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.163 236262 DEBUG nova.network.neutron [-] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.319 236262 DEBUG nova.network.neutron [-] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.337 236262 DEBUG nova.network.neutron [-] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.350 236262 INFO nova.compute.manager [-] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Took 0.19 seconds to deallocate network for instance.
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.384 236262 DEBUG oslo_concurrency.lockutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.384 236262 DEBUG oslo_concurrency.lockutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:34:16 compute-0 nova_compute[236255]: 2026-01-29 09:34:16.450 236262 DEBUG oslo_concurrency.processutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:34:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v750: 193 pgs: 193 active+clean; 42 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.1 KiB/s wr, 109 op/s
Jan 29 09:34:16 compute-0 ceph-mon[75183]: osdmap e89: 3 total, 3 up, 3 in
Jan 29 09:34:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:34:17 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262447667' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.019 236262 DEBUG oslo_concurrency.processutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.025 236262 DEBUG nova.compute.provider_tree [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.042 236262 DEBUG nova.scheduler.client.report [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.064 236262 DEBUG oslo_concurrency.lockutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.092 236262 INFO nova.scheduler.client.report [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Deleted allocations for instance b6002447-8ee9-4e0e-86cd-1e7121b5e4e0
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.176 236262 DEBUG oslo_concurrency.lockutils [None req-6b71b59e-4fb3-4ec1-85d1-d866d2ab396c 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "b6002447-8ee9-4e0e-86cd-1e7121b5e4e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:34:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:34:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 29 09:34:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 29 09:34:17 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.806 236262 DEBUG oslo_concurrency.lockutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "91dbe8b1-3e4d-436e-966e-05af44b988c4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.806 236262 DEBUG oslo_concurrency.lockutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "91dbe8b1-3e4d-436e-966e-05af44b988c4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.807 236262 DEBUG oslo_concurrency.lockutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "91dbe8b1-3e4d-436e-966e-05af44b988c4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.807 236262 DEBUG oslo_concurrency.lockutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "91dbe8b1-3e4d-436e-966e-05af44b988c4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.808 236262 DEBUG oslo_concurrency.lockutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "91dbe8b1-3e4d-436e-966e-05af44b988c4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.810 236262 INFO nova.compute.manager [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Terminating instance
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.811 236262 DEBUG oslo_concurrency.lockutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "refresh_cache-91dbe8b1-3e4d-436e-966e-05af44b988c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.812 236262 DEBUG oslo_concurrency.lockutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquired lock "refresh_cache-91dbe8b1-3e4d-436e-966e-05af44b988c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 09:34:17 compute-0 nova_compute[236255]: 2026-01-29 09:34:17.812 236262 DEBUG nova.network.neutron [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 29 09:34:17 compute-0 ceph-mon[75183]: pgmap v750: 193 pgs: 193 active+clean; 42 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.1 KiB/s wr, 109 op/s
Jan 29 09:34:17 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1262447667' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:34:17 compute-0 ceph-mon[75183]: osdmap e90: 3 total, 3 up, 3 in
Jan 29 09:34:18 compute-0 podman[241330]: 2026-01-29 09:34:18.146749572 +0000 UTC m=+0.074963496 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 29 09:34:18 compute-0 nova_compute[236255]: 2026-01-29 09:34:18.237 236262 DEBUG nova.network.neutron [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 29 09:34:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v752: 193 pgs: 193 active+clean; 42 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 4.3 KiB/s wr, 114 op/s
Jan 29 09:34:18 compute-0 nova_compute[236255]: 2026-01-29 09:34:18.810 236262 DEBUG nova.network.neutron [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 29 09:34:18 compute-0 nova_compute[236255]: 2026-01-29 09:34:18.824 236262 DEBUG oslo_concurrency.lockutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Releasing lock "refresh_cache-91dbe8b1-3e4d-436e-966e-05af44b988c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 29 09:34:18 compute-0 nova_compute[236255]: 2026-01-29 09:34:18.825 236262 DEBUG nova.compute.manager [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 29 09:34:18 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 29 09:34:18 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 1.161s CPU time.
Jan 29 09:34:18 compute-0 systemd-machined[204395]: Machine qemu-1-instance-00000001 terminated.
Jan 29 09:34:19 compute-0 nova_compute[236255]: 2026-01-29 09:34:19.044 236262 INFO nova.virt.libvirt.driver [-] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Instance destroyed successfully.
Jan 29 09:34:19 compute-0 nova_compute[236255]: 2026-01-29 09:34:19.045 236262 DEBUG nova.objects.instance [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lazy-loading 'resources' on Instance uuid 91dbe8b1-3e4d-436e-966e-05af44b988c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 09:34:19 compute-0 nova_compute[236255]: 2026-01-29 09:34:19.205 236262 INFO nova.virt.libvirt.driver [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Deleting instance files /var/lib/nova/instances/91dbe8b1-3e4d-436e-966e-05af44b988c4_del
Jan 29 09:34:19 compute-0 nova_compute[236255]: 2026-01-29 09:34:19.206 236262 INFO nova.virt.libvirt.driver [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Deletion of /var/lib/nova/instances/91dbe8b1-3e4d-436e-966e-05af44b988c4_del complete
Jan 29 09:34:19 compute-0 nova_compute[236255]: 2026-01-29 09:34:19.266 236262 INFO nova.compute.manager [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Took 0.44 seconds to destroy the instance on the hypervisor.
Jan 29 09:34:19 compute-0 nova_compute[236255]: 2026-01-29 09:34:19.267 236262 DEBUG oslo.service.loopingcall [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 29 09:34:19 compute-0 nova_compute[236255]: 2026-01-29 09:34:19.267 236262 DEBUG nova.compute.manager [-] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 29 09:34:19 compute-0 nova_compute[236255]: 2026-01-29 09:34:19.268 236262 DEBUG nova.network.neutron [-] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 29 09:34:19 compute-0 ceph-mon[75183]: pgmap v752: 193 pgs: 193 active+clean; 42 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 4.3 KiB/s wr, 114 op/s
Jan 29 09:34:20 compute-0 nova_compute[236255]: 2026-01-29 09:34:20.373 236262 DEBUG nova.network.neutron [-] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 29 09:34:20 compute-0 nova_compute[236255]: 2026-01-29 09:34:20.388 236262 DEBUG nova.network.neutron [-] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 29 09:34:20 compute-0 nova_compute[236255]: 2026-01-29 09:34:20.401 236262 INFO nova.compute.manager [-] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Took 1.13 seconds to deallocate network for instance.
Jan 29 09:34:20 compute-0 nova_compute[236255]: 2026-01-29 09:34:20.745 236262 INFO nova.compute.manager [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Took 0.34 seconds to detach 1 volumes for instance.
Jan 29 09:34:20 compute-0 nova_compute[236255]: 2026-01-29 09:34:20.746 236262 DEBUG nova.compute.manager [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Deleting volume: d774a831-6191-4c45-a2c6-9de804ae8122 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 29 09:34:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v753: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 6.8 KiB/s wr, 172 op/s
Jan 29 09:34:20 compute-0 nova_compute[236255]: 2026-01-29 09:34:20.979 236262 DEBUG oslo_concurrency.lockutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:34:20 compute-0 nova_compute[236255]: 2026-01-29 09:34:20.980 236262 DEBUG oslo_concurrency.lockutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:34:21 compute-0 nova_compute[236255]: 2026-01-29 09:34:21.022 236262 DEBUG oslo_concurrency.processutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:34:21 compute-0 podman[241380]: 2026-01-29 09:34:21.134946964 +0000 UTC m=+0.064762870 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 29 09:34:21 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:34:21 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/49332447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:34:21 compute-0 nova_compute[236255]: 2026-01-29 09:34:21.576 236262 DEBUG oslo_concurrency.processutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:34:21 compute-0 nova_compute[236255]: 2026-01-29 09:34:21.581 236262 DEBUG nova.compute.provider_tree [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:34:21 compute-0 nova_compute[236255]: 2026-01-29 09:34:21.604 236262 DEBUG nova.scheduler.client.report [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:34:21 compute-0 nova_compute[236255]: 2026-01-29 09:34:21.626 236262 DEBUG oslo_concurrency.lockutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:34:21 compute-0 nova_compute[236255]: 2026-01-29 09:34:21.665 236262 INFO nova.scheduler.client.report [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Deleted allocations for instance 91dbe8b1-3e4d-436e-966e-05af44b988c4
Jan 29 09:34:21 compute-0 nova_compute[236255]: 2026-01-29 09:34:21.740 236262 DEBUG oslo_concurrency.lockutils [None req-5ac5d7d3-4246-4573-a54a-0f681229784b 6ec470a589594a43a020f2568556969f 401d3ff8369e48ee9848bf6e778112a3 - - default default] Lock "91dbe8b1-3e4d-436e-966e-05af44b988c4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:34:21 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 29 09:34:21 compute-0 ceph-mon[75183]: pgmap v753: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 6.8 KiB/s wr, 172 op/s
Jan 29 09:34:21 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/49332447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:34:21 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 29 09:34:21 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 29 09:34:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:34:22 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3087946384' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:34:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:34:22 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3087946384' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:34:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:34:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 29 09:34:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 29 09:34:22 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 29 09:34:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v756: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.5 KiB/s wr, 84 op/s
Jan 29 09:34:22 compute-0 ceph-mon[75183]: osdmap e91: 3 total, 3 up, 3 in
Jan 29 09:34:22 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3087946384' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:34:22 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3087946384' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:34:22 compute-0 ceph-mon[75183]: osdmap e92: 3 total, 3 up, 3 in
Jan 29 09:34:23 compute-0 ceph-mon[75183]: pgmap v756: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.5 KiB/s wr, 84 op/s
Jan 29 09:34:24 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:34:24.552 152476 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:86:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:f9:50:a2:e1:9f'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 29 09:34:24 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:34:24.553 152476 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 29 09:34:24 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:34:24.554 152476 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=347a774e-f56f-46e9-8fb5-240ce07d1693, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 09:34:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v757: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 3.9 KiB/s wr, 107 op/s
Jan 29 09:34:24 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 29 09:34:24 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:24.988528) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 09:34:24 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 29 09:34:24 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679264988558, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1779, "num_deletes": 257, "total_data_size": 1812836, "memory_usage": 1847200, "flush_reason": "Manual Compaction"}
Jan 29 09:34:24 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 29 09:34:24 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679264998445, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1762226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14095, "largest_seqno": 15873, "table_properties": {"data_size": 1753720, "index_size": 5260, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17399, "raw_average_key_size": 20, "raw_value_size": 1736732, "raw_average_value_size": 2055, "num_data_blocks": 236, "num_entries": 845, "num_filter_entries": 845, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769679153, "oldest_key_time": 1769679153, "file_creation_time": 1769679264, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:34:24 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 9978 microseconds, and 4267 cpu microseconds.
Jan 29 09:34:24 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:24.998499) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1762226 bytes OK
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:24.998520) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:25.000035) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:25.000049) EVENT_LOG_v1 {"time_micros": 1769679265000044, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:25.000068) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1805046, prev total WAL file size 1805046, number of live WAL files 2.
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:25.000590) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1720KB)], [35(5141KB)]
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679265000689, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 7027365, "oldest_snapshot_seqno": -1}
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3621 keys, 5826578 bytes, temperature: kUnknown
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679265042354, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 5826578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5799230, "index_size": 17190, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9093, "raw_key_size": 86013, "raw_average_key_size": 23, "raw_value_size": 5730989, "raw_average_value_size": 1582, "num_data_blocks": 738, "num_entries": 3621, "num_filter_entries": 3621, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677896, "oldest_key_time": 0, "file_creation_time": 1769679265, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:25.042622) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 5826578 bytes
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:25.044115) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.2 rd, 139.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 5.0 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(7.3) write-amplify(3.3) OK, records in: 4145, records dropped: 524 output_compression: NoCompression
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:25.044154) EVENT_LOG_v1 {"time_micros": 1769679265044125, "job": 16, "event": "compaction_finished", "compaction_time_micros": 41770, "compaction_time_cpu_micros": 22937, "output_level": 6, "num_output_files": 1, "total_output_size": 5826578, "num_input_records": 4145, "num_output_records": 3621, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679265044558, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679265045346, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:25.000441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:25.045466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:25.045471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:25.045477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:25.045479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:34:25 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:34:25.045481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:34:25 compute-0 ceph-mon[75183]: pgmap v757: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 3.9 KiB/s wr, 107 op/s
Jan 29 09:34:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:34:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:34:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:34:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:34:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:34:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:34:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v758: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 3.5 KiB/s wr, 97 op/s
Jan 29 09:34:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:34:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 29 09:34:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 29 09:34:27 compute-0 ceph-mon[75183]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 29 09:34:27 compute-0 ceph-mon[75183]: pgmap v758: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 3.5 KiB/s wr, 97 op/s
Jan 29 09:34:27 compute-0 ceph-mon[75183]: osdmap e93: 3 total, 3 up, 3 in
Jan 29 09:34:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v760: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.0 KiB/s wr, 39 op/s
Jan 29 09:34:30 compute-0 ceph-mon[75183]: pgmap v760: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.0 KiB/s wr, 39 op/s
Jan 29 09:34:30 compute-0 nova_compute[236255]: 2026-01-29 09:34:30.598 236262 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769679255.5966992, b6002447-8ee9-4e0e-86cd-1e7121b5e4e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 09:34:30 compute-0 nova_compute[236255]: 2026-01-29 09:34:30.599 236262 INFO nova.compute.manager [-] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] VM Stopped (Lifecycle Event)
Jan 29 09:34:30 compute-0 nova_compute[236255]: 2026-01-29 09:34:30.615 236262 DEBUG nova.compute.manager [None req-47e37596-3519-49db-87de-1f6fc4b2ed43 - - - - - -] [instance: b6002447-8ee9-4e0e-86cd-1e7121b5e4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 09:34:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v761: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 873 B/s wr, 33 op/s
Jan 29 09:34:32 compute-0 ceph-mon[75183]: pgmap v761: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 873 B/s wr, 33 op/s
Jan 29 09:34:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:34:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v762: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 716 B/s wr, 27 op/s
Jan 29 09:34:34 compute-0 ceph-mon[75183]: pgmap v762: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 716 B/s wr, 27 op/s
Jan 29 09:34:34 compute-0 nova_compute[236255]: 2026-01-29 09:34:34.042 236262 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769679259.0402362, 91dbe8b1-3e4d-436e-966e-05af44b988c4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 09:34:34 compute-0 nova_compute[236255]: 2026-01-29 09:34:34.043 236262 INFO nova.compute.manager [-] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] VM Stopped (Lifecycle Event)
Jan 29 09:34:34 compute-0 nova_compute[236255]: 2026-01-29 09:34:34.063 236262 DEBUG nova.compute.manager [None req-a58f9806-39c4-4ede-ada2-a4ed913692b9 - - - - - -] [instance: 91dbe8b1-3e4d-436e-966e-05af44b988c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 09:34:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v763: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:36 compute-0 ceph-mon[75183]: pgmap v763: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v764: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:34:38 compute-0 ceph-mon[75183]: pgmap v764: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v765: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:40 compute-0 ceph-mon[75183]: pgmap v765: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v766: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:42 compute-0 ceph-mon[75183]: pgmap v766: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:34:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v767: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:43 compute-0 nova_compute[236255]: 2026-01-29 09:34:43.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:34:43 compute-0 nova_compute[236255]: 2026-01-29 09:34:43.556 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:34:44 compute-0 ceph-mon[75183]: pgmap v767: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:44 compute-0 nova_compute[236255]: 2026-01-29 09:34:44.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:34:44 compute-0 nova_compute[236255]: 2026-01-29 09:34:44.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:34:44 compute-0 nova_compute[236255]: 2026-01-29 09:34:44.583 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:34:44 compute-0 nova_compute[236255]: 2026-01-29 09:34:44.584 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:34:44 compute-0 nova_compute[236255]: 2026-01-29 09:34:44.584 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:34:44 compute-0 nova_compute[236255]: 2026-01-29 09:34:44.584 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:34:44 compute-0 nova_compute[236255]: 2026-01-29 09:34:44.584 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:34:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v768: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:34:45 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714707454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:34:45 compute-0 nova_compute[236255]: 2026-01-29 09:34:45.121 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:34:45 compute-0 nova_compute[236255]: 2026-01-29 09:34:45.253 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:34:45 compute-0 nova_compute[236255]: 2026-01-29 09:34:45.254 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5206MB free_disk=59.98826471157372GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:34:45 compute-0 nova_compute[236255]: 2026-01-29 09:34:45.254 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:34:45 compute-0 nova_compute[236255]: 2026-01-29 09:34:45.254 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:34:45 compute-0 nova_compute[236255]: 2026-01-29 09:34:45.308 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:34:45 compute-0 nova_compute[236255]: 2026-01-29 09:34:45.308 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:34:45 compute-0 nova_compute[236255]: 2026-01-29 09:34:45.330 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:34:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:34:45 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1698350258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:34:45 compute-0 nova_compute[236255]: 2026-01-29 09:34:45.906 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:34:45 compute-0 nova_compute[236255]: 2026-01-29 09:34:45.912 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:34:45 compute-0 nova_compute[236255]: 2026-01-29 09:34:45.929 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:34:45 compute-0 nova_compute[236255]: 2026-01-29 09:34:45.955 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:34:45 compute-0 nova_compute[236255]: 2026-01-29 09:34:45.955 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:34:46 compute-0 ceph-mon[75183]: pgmap v768: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:46 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3714707454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:34:46 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1698350258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:34:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v769: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:46 compute-0 nova_compute[236255]: 2026-01-29 09:34:46.950 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:34:46 compute-0 nova_compute[236255]: 2026-01-29 09:34:46.950 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:34:46 compute-0 nova_compute[236255]: 2026-01-29 09:34:46.966 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:34:46 compute-0 nova_compute[236255]: 2026-01-29 09:34:46.966 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:34:46 compute-0 nova_compute[236255]: 2026-01-29 09:34:46.966 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:34:47 compute-0 nova_compute[236255]: 2026-01-29 09:34:47.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:34:47 compute-0 nova_compute[236255]: 2026-01-29 09:34:47.555 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:34:47 compute-0 nova_compute[236255]: 2026-01-29 09:34:47.555 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:34:47 compute-0 nova_compute[236255]: 2026-01-29 09:34:47.571 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:34:47 compute-0 nova_compute[236255]: 2026-01-29 09:34:47.571 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:34:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:34:48 compute-0 ceph-mon[75183]: pgmap v769: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v770: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:49 compute-0 podman[241466]: 2026-01-29 09:34:49.118804514 +0000 UTC m=+0.065259206 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 29 09:34:50 compute-0 ceph-mon[75183]: pgmap v770: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v771: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:34:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/438212952' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:34:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:34:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/438212952' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:34:52 compute-0 podman[241492]: 2026-01-29 09:34:52.127897613 +0000 UTC m=+0.070384015 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 29 09:34:52 compute-0 ceph-mon[75183]: pgmap v771: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/438212952' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:34:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/438212952' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:34:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:34:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v772: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:54 compute-0 ceph-mon[75183]: pgmap v772: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v773: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:34:56
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', 'backups', 'volumes']
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:34:56 compute-0 ceph-mon[75183]: pgmap v773: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:34:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v774: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:34:58 compute-0 ceph-mon[75183]: pgmap v774: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:34:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v775: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:00 compute-0 ceph-mon[75183]: pgmap v775: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v776: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.756942845403104e-07 of space, bias 1.0, pg target 8.270828536209312e-05 quantized to 32 (current 32)
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.5000327611377854e-07 of space, bias 1.0, pg target 4.500098283413356e-05 quantized to 32 (current 32)
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683123502180493 of space, bias 1.0, pg target 0.2004937050654148 quantized to 32 (current 32)
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2051960589356773e-06 of space, bias 4.0, pg target 0.0014462352707228128 quantized to 16 (current 32)
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:35:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:35:02 compute-0 ceph-mon[75183]: pgmap v776: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:35:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v777: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:04 compute-0 ceph-mon[75183]: pgmap v777: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:04 compute-0 sudo[241511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:35:04 compute-0 sudo[241511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:35:04 compute-0 sudo[241511]: pam_unix(sudo:session): session closed for user root
Jan 29 09:35:04 compute-0 sudo[241536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:35:04 compute-0 sudo[241536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:35:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v778: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:04 compute-0 sudo[241536]: pam_unix(sudo:session): session closed for user root
Jan 29 09:35:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 29 09:35:04 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 09:35:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:35:04 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:35:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:35:04 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:35:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:35:05 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:35:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:35:05 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:35:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:35:05 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:35:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:35:05 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:35:05 compute-0 sudo[241592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:35:05 compute-0 sudo[241592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:35:05 compute-0 sudo[241592]: pam_unix(sudo:session): session closed for user root
Jan 29 09:35:05 compute-0 sudo[241617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:35:05 compute-0 sudo[241617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:35:05 compute-0 podman[241655]: 2026-01-29 09:35:05.322766161 +0000 UTC m=+0.037682895 container create 03c718bd59a6533e967260b5b5a1a41f75070e76c1453536923c0240ade95da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 09:35:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 09:35:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:35:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:35:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:35:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:35:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:35:05 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:35:05 compute-0 systemd[1]: Started libpod-conmon-03c718bd59a6533e967260b5b5a1a41f75070e76c1453536923c0240ade95da3.scope.
Jan 29 09:35:05 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:35:05 compute-0 podman[241655]: 2026-01-29 09:35:05.401999986 +0000 UTC m=+0.116916770 container init 03c718bd59a6533e967260b5b5a1a41f75070e76c1453536923c0240ade95da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_shannon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 09:35:05 compute-0 podman[241655]: 2026-01-29 09:35:05.308632377 +0000 UTC m=+0.023549131 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:35:05 compute-0 podman[241655]: 2026-01-29 09:35:05.41093788 +0000 UTC m=+0.125854614 container start 03c718bd59a6533e967260b5b5a1a41f75070e76c1453536923c0240ade95da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 09:35:05 compute-0 podman[241655]: 2026-01-29 09:35:05.414856756 +0000 UTC m=+0.129773520 container attach 03c718bd59a6533e967260b5b5a1a41f75070e76c1453536923c0240ade95da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:35:05 compute-0 intelligent_shannon[241671]: 167 167
Jan 29 09:35:05 compute-0 systemd[1]: libpod-03c718bd59a6533e967260b5b5a1a41f75070e76c1453536923c0240ade95da3.scope: Deactivated successfully.
Jan 29 09:35:05 compute-0 podman[241655]: 2026-01-29 09:35:05.418813734 +0000 UTC m=+0.133730468 container died 03c718bd59a6533e967260b5b5a1a41f75070e76c1453536923c0240ade95da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 09:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-26668e676b8a0a4adf355d085234b0556df4f7af91add77b7f02317cf344d4cd-merged.mount: Deactivated successfully.
Jan 29 09:35:05 compute-0 podman[241655]: 2026-01-29 09:35:05.456318994 +0000 UTC m=+0.171235728 container remove 03c718bd59a6533e967260b5b5a1a41f75070e76c1453536923c0240ade95da3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:35:05 compute-0 systemd[1]: libpod-conmon-03c718bd59a6533e967260b5b5a1a41f75070e76c1453536923c0240ade95da3.scope: Deactivated successfully.
Jan 29 09:35:05 compute-0 podman[241696]: 2026-01-29 09:35:05.612298316 +0000 UTC m=+0.045121918 container create 0c1098da00737fa8e4d77da89aa88510354cb5049ddcc673889c3bf42b1f45d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_khorana, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:35:05 compute-0 systemd[1]: Started libpod-conmon-0c1098da00737fa8e4d77da89aa88510354cb5049ddcc673889c3bf42b1f45d7.scope.
Jan 29 09:35:05 compute-0 podman[241696]: 2026-01-29 09:35:05.590508573 +0000 UTC m=+0.023332215 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:35:05 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93349c4cd534bd15ce8161ee38976f627b480feeb68f784a1b6371759095e695/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93349c4cd534bd15ce8161ee38976f627b480feeb68f784a1b6371759095e695/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93349c4cd534bd15ce8161ee38976f627b480feeb68f784a1b6371759095e695/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93349c4cd534bd15ce8161ee38976f627b480feeb68f784a1b6371759095e695/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93349c4cd534bd15ce8161ee38976f627b480feeb68f784a1b6371759095e695/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:35:05 compute-0 podman[241696]: 2026-01-29 09:35:05.705772258 +0000 UTC m=+0.138595850 container init 0c1098da00737fa8e4d77da89aa88510354cb5049ddcc673889c3bf42b1f45d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 09:35:05 compute-0 podman[241696]: 2026-01-29 09:35:05.713297043 +0000 UTC m=+0.146120605 container start 0c1098da00737fa8e4d77da89aa88510354cb5049ddcc673889c3bf42b1f45d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_khorana, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 09:35:05 compute-0 podman[241696]: 2026-01-29 09:35:05.716970663 +0000 UTC m=+0.149794255 container attach 0c1098da00737fa8e4d77da89aa88510354cb5049ddcc673889c3bf42b1f45d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:35:06 compute-0 fervent_khorana[241713]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:35:06 compute-0 fervent_khorana[241713]: --> All data devices are unavailable
Jan 29 09:35:06 compute-0 systemd[1]: libpod-0c1098da00737fa8e4d77da89aa88510354cb5049ddcc673889c3bf42b1f45d7.scope: Deactivated successfully.
Jan 29 09:35:06 compute-0 conmon[241713]: conmon 0c1098da00737fa8e4d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c1098da00737fa8e4d77da89aa88510354cb5049ddcc673889c3bf42b1f45d7.scope/container/memory.events
Jan 29 09:35:06 compute-0 podman[241696]: 2026-01-29 09:35:06.127533929 +0000 UTC m=+0.560357491 container died 0c1098da00737fa8e4d77da89aa88510354cb5049ddcc673889c3bf42b1f45d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-93349c4cd534bd15ce8161ee38976f627b480feeb68f784a1b6371759095e695-merged.mount: Deactivated successfully.
Jan 29 09:35:06 compute-0 podman[241696]: 2026-01-29 09:35:06.171096804 +0000 UTC m=+0.603920366 container remove 0c1098da00737fa8e4d77da89aa88510354cb5049ddcc673889c3bf42b1f45d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 09:35:06 compute-0 systemd[1]: libpod-conmon-0c1098da00737fa8e4d77da89aa88510354cb5049ddcc673889c3bf42b1f45d7.scope: Deactivated successfully.
Jan 29 09:35:06 compute-0 sudo[241617]: pam_unix(sudo:session): session closed for user root
Jan 29 09:35:06 compute-0 sudo[241746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:35:06 compute-0 sudo[241746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:35:06 compute-0 sudo[241746]: pam_unix(sudo:session): session closed for user root
Jan 29 09:35:06 compute-0 ceph-mon[75183]: pgmap v778: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:06 compute-0 sudo[241771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:35:06 compute-0 sudo[241771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:35:06 compute-0 podman[241808]: 2026-01-29 09:35:06.612650393 +0000 UTC m=+0.036184405 container create 3b46f1f69eb7fe26efdd27fa2217f5582176af22d7ec00d4e843ad05f8d91f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_yonath, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 29 09:35:06 compute-0 systemd[1]: Started libpod-conmon-3b46f1f69eb7fe26efdd27fa2217f5582176af22d7ec00d4e843ad05f8d91f02.scope.
Jan 29 09:35:06 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:35:06 compute-0 podman[241808]: 2026-01-29 09:35:06.681002662 +0000 UTC m=+0.104536664 container init 3b46f1f69eb7fe26efdd27fa2217f5582176af22d7ec00d4e843ad05f8d91f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_yonath, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:35:06 compute-0 podman[241808]: 2026-01-29 09:35:06.687431617 +0000 UTC m=+0.110965619 container start 3b46f1f69eb7fe26efdd27fa2217f5582176af22d7ec00d4e843ad05f8d91f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 29 09:35:06 compute-0 gracious_yonath[241824]: 167 167
Jan 29 09:35:06 compute-0 systemd[1]: libpod-3b46f1f69eb7fe26efdd27fa2217f5582176af22d7ec00d4e843ad05f8d91f02.scope: Deactivated successfully.
Jan 29 09:35:06 compute-0 podman[241808]: 2026-01-29 09:35:06.692499145 +0000 UTC m=+0.116033147 container attach 3b46f1f69eb7fe26efdd27fa2217f5582176af22d7ec00d4e843ad05f8d91f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 09:35:06 compute-0 podman[241808]: 2026-01-29 09:35:06.692973418 +0000 UTC m=+0.116507420 container died 3b46f1f69eb7fe26efdd27fa2217f5582176af22d7ec00d4e843ad05f8d91f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 09:35:06 compute-0 podman[241808]: 2026-01-29 09:35:06.598072367 +0000 UTC m=+0.021606389 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-43c5d205a72763647049240d92885c99428ec78f564a1512c48b4a064527fb9c-merged.mount: Deactivated successfully.
Jan 29 09:35:06 compute-0 podman[241808]: 2026-01-29 09:35:06.725957895 +0000 UTC m=+0.149491897 container remove 3b46f1f69eb7fe26efdd27fa2217f5582176af22d7ec00d4e843ad05f8d91f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 09:35:06 compute-0 systemd[1]: libpod-conmon-3b46f1f69eb7fe26efdd27fa2217f5582176af22d7ec00d4e843ad05f8d91f02.scope: Deactivated successfully.
Jan 29 09:35:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v779: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:06 compute-0 podman[241846]: 2026-01-29 09:35:06.863879806 +0000 UTC m=+0.040101702 container create 64848ecb66f34665aaad945be5505e09b87d418940d6bf3bdfe2018e649339b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:35:06 compute-0 systemd[1]: Started libpod-conmon-64848ecb66f34665aaad945be5505e09b87d418940d6bf3bdfe2018e649339b6.scope.
Jan 29 09:35:06 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7935efab9d291930f4e9d0dd1433164a05f2d5e94d5db42211d005a624abdb85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7935efab9d291930f4e9d0dd1433164a05f2d5e94d5db42211d005a624abdb85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7935efab9d291930f4e9d0dd1433164a05f2d5e94d5db42211d005a624abdb85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7935efab9d291930f4e9d0dd1433164a05f2d5e94d5db42211d005a624abdb85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:35:06 compute-0 podman[241846]: 2026-01-29 09:35:06.846873783 +0000 UTC m=+0.023095719 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:35:06 compute-0 podman[241846]: 2026-01-29 09:35:06.957501972 +0000 UTC m=+0.133723868 container init 64848ecb66f34665aaad945be5505e09b87d418940d6bf3bdfe2018e649339b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 09:35:06 compute-0 podman[241846]: 2026-01-29 09:35:06.963773723 +0000 UTC m=+0.139995619 container start 64848ecb66f34665aaad945be5505e09b87d418940d6bf3bdfe2018e649339b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:35:06 compute-0 podman[241846]: 2026-01-29 09:35:06.967650278 +0000 UTC m=+0.143872384 container attach 64848ecb66f34665aaad945be5505e09b87d418940d6bf3bdfe2018e649339b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]: {
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:     "0": [
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:         {
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "devices": [
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "/dev/loop3"
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             ],
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_name": "ceph_lv0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_size": "21470642176",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "name": "ceph_lv0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "tags": {
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.cluster_name": "ceph",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.crush_device_class": "",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.encrypted": "0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.objectstore": "bluestore",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.osd_id": "0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.type": "block",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.vdo": "0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.with_tpm": "0"
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             },
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "type": "block",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "vg_name": "ceph_vg0"
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:         }
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:     ],
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:     "1": [
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:         {
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "devices": [
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "/dev/loop4"
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             ],
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_name": "ceph_lv1",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_size": "21470642176",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "name": "ceph_lv1",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "tags": {
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.cluster_name": "ceph",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.crush_device_class": "",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.encrypted": "0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.objectstore": "bluestore",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.osd_id": "1",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.type": "block",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.vdo": "0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.with_tpm": "0"
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             },
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "type": "block",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "vg_name": "ceph_vg1"
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:         }
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:     ],
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:     "2": [
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:         {
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "devices": [
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "/dev/loop5"
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             ],
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_name": "ceph_lv2",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_size": "21470642176",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "name": "ceph_lv2",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "tags": {
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.cluster_name": "ceph",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.crush_device_class": "",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.encrypted": "0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.objectstore": "bluestore",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.osd_id": "2",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.type": "block",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.vdo": "0",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:                 "ceph.with_tpm": "0"
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             },
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "type": "block",
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:             "vg_name": "ceph_vg2"
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:         }
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]:     ]
Jan 29 09:35:07 compute-0 intelligent_sammet[241862]: }
Jan 29 09:35:07 compute-0 systemd[1]: libpod-64848ecb66f34665aaad945be5505e09b87d418940d6bf3bdfe2018e649339b6.scope: Deactivated successfully.
Jan 29 09:35:07 compute-0 podman[241846]: 2026-01-29 09:35:07.291031623 +0000 UTC m=+0.467253519 container died 64848ecb66f34665aaad945be5505e09b87d418940d6bf3bdfe2018e649339b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:35:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7935efab9d291930f4e9d0dd1433164a05f2d5e94d5db42211d005a624abdb85-merged.mount: Deactivated successfully.
Jan 29 09:35:07 compute-0 podman[241846]: 2026-01-29 09:35:07.327113534 +0000 UTC m=+0.503335430 container remove 64848ecb66f34665aaad945be5505e09b87d418940d6bf3bdfe2018e649339b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:35:07 compute-0 systemd[1]: libpod-conmon-64848ecb66f34665aaad945be5505e09b87d418940d6bf3bdfe2018e649339b6.scope: Deactivated successfully.
Jan 29 09:35:07 compute-0 sudo[241771]: pam_unix(sudo:session): session closed for user root
Jan 29 09:35:07 compute-0 sudo[241883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:35:07 compute-0 sudo[241883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:35:07 compute-0 sudo[241883]: pam_unix(sudo:session): session closed for user root
Jan 29 09:35:07 compute-0 sudo[241908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:35:07 compute-0 sudo[241908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:35:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:35:07 compute-0 podman[241945]: 2026-01-29 09:35:07.726894308 +0000 UTC m=+0.034493550 container create b247024207b0e237c81639668cd006412de053a8d84f9fb9b913b99478d34536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chaum, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:35:07 compute-0 systemd[1]: Started libpod-conmon-b247024207b0e237c81639668cd006412de053a8d84f9fb9b913b99478d34536.scope.
Jan 29 09:35:07 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:35:07 compute-0 podman[241945]: 2026-01-29 09:35:07.795078962 +0000 UTC m=+0.102678224 container init b247024207b0e237c81639668cd006412de053a8d84f9fb9b913b99478d34536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chaum, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:35:07 compute-0 podman[241945]: 2026-01-29 09:35:07.800108699 +0000 UTC m=+0.107707941 container start b247024207b0e237c81639668cd006412de053a8d84f9fb9b913b99478d34536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chaum, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:35:07 compute-0 objective_chaum[241961]: 167 167
Jan 29 09:35:07 compute-0 systemd[1]: libpod-b247024207b0e237c81639668cd006412de053a8d84f9fb9b913b99478d34536.scope: Deactivated successfully.
Jan 29 09:35:07 compute-0 podman[241945]: 2026-01-29 09:35:07.804027055 +0000 UTC m=+0.111626317 container attach b247024207b0e237c81639668cd006412de053a8d84f9fb9b913b99478d34536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chaum, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:35:07 compute-0 conmon[241961]: conmon b247024207b0e237c816 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b247024207b0e237c81639668cd006412de053a8d84f9fb9b913b99478d34536.scope/container/memory.events
Jan 29 09:35:07 compute-0 podman[241945]: 2026-01-29 09:35:07.804659633 +0000 UTC m=+0.112258875 container died b247024207b0e237c81639668cd006412de053a8d84f9fb9b913b99478d34536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:35:07 compute-0 podman[241945]: 2026-01-29 09:35:07.712162477 +0000 UTC m=+0.019761719 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:35:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0ecb84e1df5137b61c5adefb14af2aac5a112d84f6b5db6d6863cccd36df636-merged.mount: Deactivated successfully.
Jan 29 09:35:07 compute-0 podman[241945]: 2026-01-29 09:35:07.83767075 +0000 UTC m=+0.145269992 container remove b247024207b0e237c81639668cd006412de053a8d84f9fb9b913b99478d34536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 09:35:07 compute-0 systemd[1]: libpod-conmon-b247024207b0e237c81639668cd006412de053a8d84f9fb9b913b99478d34536.scope: Deactivated successfully.
Jan 29 09:35:07 compute-0 podman[241985]: 2026-01-29 09:35:07.948849534 +0000 UTC m=+0.030352156 container create 47d7c7f7ee0d649660043c8dba3149808521ea122c0667f9b027c65eb117b4d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 09:35:07 compute-0 systemd[1]: Started libpod-conmon-47d7c7f7ee0d649660043c8dba3149808521ea122c0667f9b027c65eb117b4d1.scope.
Jan 29 09:35:08 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a183fd7464d724dc8538efd37b26cb30dbb1b0a3bbd9ccf3bb5bd96b8abef964/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a183fd7464d724dc8538efd37b26cb30dbb1b0a3bbd9ccf3bb5bd96b8abef964/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a183fd7464d724dc8538efd37b26cb30dbb1b0a3bbd9ccf3bb5bd96b8abef964/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a183fd7464d724dc8538efd37b26cb30dbb1b0a3bbd9ccf3bb5bd96b8abef964/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:35:08 compute-0 podman[241985]: 2026-01-29 09:35:07.935852721 +0000 UTC m=+0.017355353 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:35:08 compute-0 podman[241985]: 2026-01-29 09:35:08.034069002 +0000 UTC m=+0.115571644 container init 47d7c7f7ee0d649660043c8dba3149808521ea122c0667f9b027c65eb117b4d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_dewdney, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:35:08 compute-0 podman[241985]: 2026-01-29 09:35:08.03988024 +0000 UTC m=+0.121382862 container start 47d7c7f7ee0d649660043c8dba3149808521ea122c0667f9b027c65eb117b4d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_dewdney, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:35:08 compute-0 podman[241985]: 2026-01-29 09:35:08.043241372 +0000 UTC m=+0.124743994 container attach 47d7c7f7ee0d649660043c8dba3149808521ea122c0667f9b027c65eb117b4d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_dewdney, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 09:35:08 compute-0 ceph-mon[75183]: pgmap v779: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:08 compute-0 lvm[242079]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:35:08 compute-0 lvm[242079]: VG ceph_vg0 finished
Jan 29 09:35:08 compute-0 lvm[242080]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:35:08 compute-0 lvm[242080]: VG ceph_vg1 finished
Jan 29 09:35:08 compute-0 lvm[242082]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:35:08 compute-0 lvm[242082]: VG ceph_vg2 finished
Jan 29 09:35:08 compute-0 objective_dewdney[242001]: {}
Jan 29 09:35:08 compute-0 podman[241985]: 2026-01-29 09:35:08.810234672 +0000 UTC m=+0.891737294 container died 47d7c7f7ee0d649660043c8dba3149808521ea122c0667f9b027c65eb117b4d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:35:08 compute-0 systemd[1]: libpod-47d7c7f7ee0d649660043c8dba3149808521ea122c0667f9b027c65eb117b4d1.scope: Deactivated successfully.
Jan 29 09:35:08 compute-0 systemd[1]: libpod-47d7c7f7ee0d649660043c8dba3149808521ea122c0667f9b027c65eb117b4d1.scope: Consumed 1.098s CPU time.
Jan 29 09:35:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v780: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a183fd7464d724dc8538efd37b26cb30dbb1b0a3bbd9ccf3bb5bd96b8abef964-merged.mount: Deactivated successfully.
Jan 29 09:35:08 compute-0 podman[241985]: 2026-01-29 09:35:08.853117369 +0000 UTC m=+0.934619991 container remove 47d7c7f7ee0d649660043c8dba3149808521ea122c0667f9b027c65eb117b4d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_dewdney, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 09:35:08 compute-0 systemd[1]: libpod-conmon-47d7c7f7ee0d649660043c8dba3149808521ea122c0667f9b027c65eb117b4d1.scope: Deactivated successfully.
Jan 29 09:35:08 compute-0 sudo[241908]: pam_unix(sudo:session): session closed for user root
Jan 29 09:35:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:35:08 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:35:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:35:08 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:35:08 compute-0 sudo[242099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:35:08 compute-0 sudo[242099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:35:08 compute-0 sudo[242099]: pam_unix(sudo:session): session closed for user root
Jan 29 09:35:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:35:09.037 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:35:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:35:09.039 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:35:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:35:09.040 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:35:09 compute-0 ceph-mon[75183]: pgmap v780: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:09 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:35:09 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:35:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v781: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:11 compute-0 ceph-mon[75183]: pgmap v781: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:35:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v782: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:13 compute-0 ceph-mon[75183]: pgmap v782: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v783: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:15 compute-0 ceph-mon[75183]: pgmap v783: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v784: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:35:18 compute-0 ceph-mon[75183]: pgmap v784: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v785: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:20 compute-0 ceph-mon[75183]: pgmap v785: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:20 compute-0 podman[242124]: 2026-01-29 09:35:20.150832771 +0000 UTC m=+0.089380302 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:35:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v786: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:22 compute-0 ceph-mon[75183]: pgmap v786: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:35:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v787: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:23 compute-0 podman[242150]: 2026-01-29 09:35:23.097822712 +0000 UTC m=+0.042793065 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 09:35:24 compute-0 ceph-mon[75183]: pgmap v787: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v788: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:26 compute-0 sshd-session[242169]: Accepted publickey for zuul from 192.168.122.10 port 51804 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:35:26 compute-0 ceph-mon[75183]: pgmap v788: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:26 compute-0 systemd-logind[799]: New session 51 of user zuul.
Jan 29 09:35:26 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 29 09:35:26 compute-0 sshd-session[242169]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:35:26 compute-0 sudo[242173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 29 09:35:26 compute-0 sudo[242173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:35:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:35:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:35:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:35:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:35:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:35:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:35:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v789: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:35:28 compute-0 ceph-mon[75183]: pgmap v789: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14692 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v790: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14694 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:29 compute-0 ceph-mon[75183]: from='client.14692 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 29 09:35:29 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3644139592' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 29 09:35:30 compute-0 ceph-mon[75183]: pgmap v790: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:30 compute-0 ceph-mon[75183]: from='client.14694 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:30 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3644139592' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 29 09:35:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v791: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:32 compute-0 ceph-mon[75183]: pgmap v791: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:35:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v792: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:33 compute-0 ovs-vsctl[242471]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 29 09:35:34 compute-0 ceph-mon[75183]: pgmap v792: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:34 compute-0 virtqemud[236585]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 29 09:35:34 compute-0 virtqemud[236585]: hostname: compute-0
Jan 29 09:35:34 compute-0 virtqemud[236585]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 29 09:35:34 compute-0 virtqemud[236585]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 29 09:35:34 compute-0 virtqemud[236585]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 29 09:35:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v793: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:34 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: cache status {prefix=cache status} (starting...)
Jan 29 09:35:35 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: client ls {prefix=client ls} (starting...)
Jan 29 09:35:35 compute-0 lvm[242796]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:35:35 compute-0 lvm[242796]: VG ceph_vg1 finished
Jan 29 09:35:35 compute-0 lvm[242828]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:35:35 compute-0 lvm[242828]: VG ceph_vg2 finished
Jan 29 09:35:35 compute-0 lvm[242835]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:35:35 compute-0 lvm[242835]: VG ceph_vg0 finished
Jan 29 09:35:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14698 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:35 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: damage ls {prefix=damage ls} (starting...)
Jan 29 09:35:35 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: dump loads {prefix=dump loads} (starting...)
Jan 29 09:35:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14700 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:35 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 29 09:35:36 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 29 09:35:36 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 29 09:35:36 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 29 09:35:36 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223875473' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 29 09:35:36 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 29 09:35:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14704 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:36 compute-0 ceph-mon[75183]: pgmap v793: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:36 compute-0 ceph-mon[75183]: from='client.14698 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:36 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2223875473' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 29 09:35:36 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 29 09:35:36 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:35:36 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1153569660' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:35:36 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 29 09:35:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14708 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:36 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 29 09:35:36 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]: 2026-01-29T09:35:36.773+0000 7f5f5ebc1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 29 09:35:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v794: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:36 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: ops {prefix=ops} (starting...)
Jan 29 09:35:36 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 29 09:35:37 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2162874160' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 29 09:35:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 29 09:35:37 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/600060469' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 29 09:35:37 compute-0 ceph-mon[75183]: from='client.14700 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:37 compute-0 ceph-mon[75183]: from='client.14704 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:37 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1153569660' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:35:37 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2162874160' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 29 09:35:37 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/600060469' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 29 09:35:37 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: session ls {prefix=session ls} (starting...)
Jan 29 09:35:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 29 09:35:37 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3253148879' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 29 09:35:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:35:37 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: status {prefix=status} (starting...)
Jan 29 09:35:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 29 09:35:37 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2652468455' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 29 09:35:38 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14718 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 29 09:35:38 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2507780457' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 29 09:35:38 compute-0 ceph-mon[75183]: from='client.14708 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:38 compute-0 ceph-mon[75183]: pgmap v794: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:38 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3253148879' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 29 09:35:38 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2652468455' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 29 09:35:38 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2507780457' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 29 09:35:38 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14722 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 29 09:35:38 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315065782' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 29 09:35:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v795: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 29 09:35:38 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2342026095' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 29 09:35:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 29 09:35:39 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103008806' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 29 09:35:39 compute-0 ceph-mon[75183]: from='client.14718 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:39 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1315065782' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 29 09:35:39 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2342026095' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 29 09:35:39 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2103008806' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 29 09:35:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 29 09:35:39 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/13669570' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 29 09:35:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 29 09:35:39 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/324622847' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 29 09:35:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14734 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:40 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 29 09:35:40 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]: 2026-01-29T09:35:40.042+0000 7f5f5ebc1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 29 09:35:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 29 09:35:40 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/988476118' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 29 09:35:40 compute-0 ceph-mon[75183]: from='client.14722 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:40 compute-0 ceph-mon[75183]: pgmap v795: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:40 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/13669570' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 29 09:35:40 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/324622847' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 29 09:35:40 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/988476118' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 29 09:35:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 29 09:35:40 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2333061109' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 29 09:35:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v796: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14740 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007672 3 0.000117
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008294 3 0.000184
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008041 3 0.000147
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007247 3 0.000309
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007165 3 0.000205
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007214 3 0.000145
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007221 3 0.000200
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007115 3 0.000684
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007126 3 0.000132
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007120 3 0.000139
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006996 3 0.000136
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=39/40 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.10( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.14( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015178 3 0.000185
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015052 3 0.000610
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015059 3 0.000133
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=39/40 n=0 ec=19/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015071 3 0.000068
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=39/40 n=0 ec=19/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=39/40 n=0 ec=19/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=39/40 n=0 ec=19/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014820 3 0.000139
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014707 3 0.000227
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.12( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014700 3 0.000174
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014612 3 0.000129
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014920 3 0.000429
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.10( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014645 3 0.000394
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.10( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.10( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.10( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014546 3 0.000112
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014487 3 0.000271
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014466 3 0.000120
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.14( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014483 3 0.000305
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014392 3 0.000097
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.14( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.14( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000036 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.14( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014510 3 0.000120
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014430 3 0.000099
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014401 3 0.000078
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014265 3 0.000088
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.12( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014258 3 0.000893
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.12( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.12( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 40 pg[2.12( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/19 les/c/f=40/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:29.622205+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 57556992 unmapped: 3252224 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 40 handle_osd_map epochs [40,41], i have 40, src has [1,41]
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=25) [2] r=0 lpr=25 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 16.719444 37 0.000534
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=25) [2] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 16.765106 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=25) [2] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 17.945400 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=25) [2] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] exit Started 17.945440 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=25) [2] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active pruub 68.130348206s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.2(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.3(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.4(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.5(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.6(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.7(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.8(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.9(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.a(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.b(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.c(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.d(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.e(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.f(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.10(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.11(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.12(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.13(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.14(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.15(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.16(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.17(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.18(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.19(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1a(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1b(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1c(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1d(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1e(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1f(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown pruub 68.130348206s@ mbc={}] exit Reset 0.002517 1 0.000196
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown pruub 68.130348206s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown pruub 68.130348206s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown pruub 68.130348206s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown pruub 68.130348206s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown pruub 68.130348206s@ mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown pruub 68.130348206s@ mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering pruub 68.130348206s@ mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering pruub 68.130348206s@ mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering pruub 68.130348206s@ mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering pruub 68.130348206s@ mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering pruub 68.130348206s@ mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering pruub 68.130348206s@ mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering pruub 68.130348206s@ mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002751 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002739 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.001412 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.001546 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.001514 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.001433 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.003582 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.003551 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.003547 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.003482 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.001992 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.001965 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.004046 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002800 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002825 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002881 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002923 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002948 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002957 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002960 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002951 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002970 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002975 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.004114 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.004083 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.004053 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.004079 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.004041 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.004006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.004717 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.004656 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=0 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:30.622356+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 58785792 unmapped: 2023424 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 41 handle_osd_map epochs [41,42], i have 41, src has [1,42]
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.008423 4 0.000076
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000044 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000091 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.008231 4 0.000065
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.008416 4 0.000017
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000034 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000056 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000070 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.008525 4 0.000016
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.008080 4 0.000047
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.008035 4 0.000041
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000042 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000033 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000030 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.007995 4 0.000023
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.007898 4 0.000023
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000044 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000030 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.007865 4 0.000039
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000054 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.007882 4 0.000026
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.007859 4 0.000019
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000062 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000033 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000054 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.007913 4 0.000014
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.007308 4 0.000017
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.007334 4 0.000014
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000064 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000048 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000023 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.007857 4 0.000022
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000070 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.007851 4 0.000022
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000048 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.009638 4 0.000013
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000044 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering pruub 68.130348206s@ mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010503 3 0.000126
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering pruub 68.130348206s@ mbc={}] exit Started/Primary/Peering 1.010599 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41 pruub=15.280164719s) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown pruub 68.130348206s@ mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=41/42 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000040 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.009799 4 0.000034
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.009182 4 0.000016
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000043 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.009228 4 0.000016
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000046 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000107 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.009374 4 0.000015
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000067 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.009481 4 0.000017
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000051 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000045 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000037 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.009280 4 0.000022
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.008187 4 0.000029
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000058 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000025 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000039 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.008370 4 0.000032
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.008096 4 0.000024
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.008521 4 0.000019
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000047 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000049 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.009699 4 0.000032
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000051 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.009832 4 0.000016
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.008913 4 0.000026
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.009013 4 0.000016
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000043 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000044 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000077 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000400 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000200 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000017 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000043 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=25/26 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002562 3 0.000226
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006604 3 0.000335
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006496 3 0.000103
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006492 3 0.000076
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006397 3 0.000096
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000074 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006929 3 0.000137
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007155 3 0.000105
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007096 3 0.000076
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007208 3 0.000160
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007038 3 0.000130
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007047 3 0.000114
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006962 3 0.000145
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006951 3 0.000104
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007196 3 0.000184
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=41/42 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=41/42 n=0 ec=25/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007570 3 0.000074
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=41/42 n=0 ec=25/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=41/42 n=0 ec=25/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=41/42 n=0 ec=25/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007625 3 0.000128
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000111 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007982 3 0.000641
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007839 3 0.000101
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007764 3 0.000105
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007607 3 0.000181
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007694 3 0.000159
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007704 3 0.000141
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007555 3 0.000130
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007672 3 0.000116
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007522 3 0.000109
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007556 3 0.000103
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007551 3 0.000296
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007369 3 0.000104
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007523 3 0.000121
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007380 3 0.000203
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=25/25 les/c/f=26/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007364 3 0.000503
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007425 3 0.000293
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/25 les/c/f=42/26/0 sis=41) [2] r=0 lpr=41 pi=[25,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:31.622956+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 42 heartbeat osd_stat(store_statfs(0x4fe160000/0x0/0x4ffc00000, data 0x284ce/0x68000, compress 0x0/0x0/0x0, omap 0x40d7, meta 0x1a2bf29), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 58990592 unmapped: 2867200 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 42 heartbeat osd_stat(store_statfs(0x4fe160000/0x0/0x4ffc00000, data 0x284ce/0x68000, compress 0x0/0x0/0x0, omap 0x40d7, meta 0x1a2bf29), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:32.623152+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 42 handle_osd_map epochs [43,43], i have 42, src has [1,43]
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 59031552 unmapped: 2826240 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 324918 data_alloc: 218103808 data_used: 252
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:33.623315+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 59088896 unmapped: 2768896 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 43 handle_osd_map epochs [44,44], i have 43, src has [1,44]
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:34.623455+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 59236352 unmapped: 2621440 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:35.623794+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 59236352 unmapped: 2621440 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 44 handle_osd_map epochs [45,45], i have 44, src has [1,45]
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 44 handle_osd_map epochs [44,45], i have 45, src has [1,45]
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.128208 13 0.000155
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.142667 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.142714 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.142755 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.871208191s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.814163208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.073814 7 0.000090
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.080499 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.080602 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.080665 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.871096611s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814163208s@ mbc={}] exit Reset 0.000173 1 0.000251
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.871096611s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814163208s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.871096611s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814163208s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.871096611s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814163208s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.871096611s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814163208s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.871096611s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814163208s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.073615 7 0.000045
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.080611 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.080683 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925963402s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.869056702s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.080719 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925905228s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869056702s@ mbc={}] exit Reset 0.000108 1 0.000201
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925905228s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869056702s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925905228s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869056702s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925905228s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869056702s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925905228s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869056702s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925905228s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869056702s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926049232s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.869293213s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926024437s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869293213s@ mbc={}] exit Reset 0.000103 1 0.000188
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926024437s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869293213s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926024437s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869293213s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926024437s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869293213s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926024437s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869293213s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926024437s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869293213s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.128796 13 0.000084
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.143235 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.143286 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.143317 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870428085s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813789368s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.128808 13 0.000083
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870413780s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813789368s@ mbc={}] exit Reset 0.000028 1 0.000060
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.143350 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870413780s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813789368s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.143414 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870413780s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813789368s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870413780s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813789368s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870413780s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813789368s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870413780s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813789368s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.143444 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870702744s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.814125061s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.128977 13 0.000098
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.143475 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.143547 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.143571 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870686531s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814125061s@ mbc={}] exit Reset 0.000046 1 0.000064
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870686531s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814125061s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870686531s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814125061s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870686531s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814125061s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870686531s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814125061s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870686531s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814125061s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870272636s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813743591s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.129131 13 0.000105
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.143712 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.143778 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870223999s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813743591s@ mbc={}] exit Reset 0.000070 1 0.000067
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870223999s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813743591s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.143802 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870223999s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813743591s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870223999s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813743591s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870223999s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813743591s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870223999s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813743591s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870153427s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813697815s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.074363 7 0.000042
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870141029s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813697815s@ mbc={}] exit Reset 0.000028 1 0.000072
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.080882 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870141029s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813697815s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.080929 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870141029s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813697815s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.080964 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870141029s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813697815s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870141029s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813697815s@ mbc={}] exit Start 0.000028 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870141029s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813697815s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.128949 13 0.000192
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.143245 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.143288 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925490379s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.869110107s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.143314 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925471306s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869110107s@ mbc={}] exit Reset 0.000042 1 0.000088
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.074442 7 0.000069
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925471306s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869110107s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.080871 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925471306s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869110107s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925471306s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869110107s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.080929 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870522499s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.814178467s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925471306s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869110107s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925471306s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869110107s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.080954 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870504379s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814178467s@ mbc={}] exit Reset 0.000038 1 0.000064
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870504379s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814178467s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870504379s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814178467s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870504379s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814178467s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870504379s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814178467s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.870504379s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.814178467s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925465584s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.869155884s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925437927s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869155884s@ mbc={}] exit Reset 0.000048 1 0.000075
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925437927s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869155884s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925437927s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869155884s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925437927s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869155884s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.073762 7 0.000054
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925437927s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869155884s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.080949 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925437927s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869155884s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.129367 13 0.000076
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.081016 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.143893 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.081036 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.143981 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.144015 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926127434s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.869911194s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926114082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869911194s@ mbc={}] exit Reset 0.000031 1 0.000059
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926114082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869911194s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869931221s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813735962s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926114082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869911194s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926114082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869911194s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926114082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869911194s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.073682 7 0.000047
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926114082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869911194s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.080925 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.081013 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.081045 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869914055s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813735962s@ mbc={}] exit Reset 0.000056 1 0.000068
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869914055s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813735962s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869914055s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813735962s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869914055s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813735962s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869914055s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813735962s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926186562s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.870048523s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869914055s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813735962s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926169395s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870048523s@ mbc={}] exit Reset 0.000036 1 0.000066
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926169395s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870048523s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926169395s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870048523s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926169395s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870048523s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926169395s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870048523s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926169395s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870048523s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.073894 7 0.000080
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.081015 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.081058 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.081080 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.129666 13 0.000167
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.144309 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.073732 7 0.000040
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.144376 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.080957 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.081027 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926018715s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.869964600s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.144407 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.081078 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925992012s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869964600s@ mbc={}] exit Reset 0.000044 1 0.000068
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925992012s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869964600s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925992012s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869964600s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925992012s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869964600s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925992012s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869964600s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925992012s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.869964600s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869600296s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813606262s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869582176s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813606262s@ mbc={}] exit Reset 0.000063 1 0.000092
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926094055s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.870071411s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926009178s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870071411s@ mbc={}] exit Reset 0.000098 1 0.000122
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869582176s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813606262s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926009178s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870071411s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926009178s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870071411s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926009178s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870071411s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869582176s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813606262s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926009178s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870071411s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869582176s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813606262s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.926009178s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870071411s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869582176s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813606262s@ mbc={}] exit Start 0.000018 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869582176s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813606262s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.129865 13 0.000072
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.144601 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.144686 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.144740 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869405746s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813545227s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869391441s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813545227s@ mbc={}] exit Reset 0.000031 1 0.000052
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869391441s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813545227s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869391441s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813545227s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869391441s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813545227s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869391441s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813545227s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869391441s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813545227s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.074001 7 0.000190
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.080994 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.081078 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.081120 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925883293s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.870101929s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925864220s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870101929s@ mbc={}] exit Reset 0.000038 1 0.000064
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925864220s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870101929s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925864220s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870101929s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925864220s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870101929s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925864220s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870101929s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925864220s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870101929s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.130256 13 0.000067
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.144998 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.145176 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.145197 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869065285s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813446045s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869050980s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813446045s@ mbc={}] exit Reset 0.000030 1 0.000053
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869050980s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813446045s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869050980s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813446045s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869050980s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813446045s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869050980s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813446045s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.869050980s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813446045s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.074143 7 0.000079
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.082011 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.082065 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.131403 13 0.000101
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.082099 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.146507 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.146575 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.146617 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925468445s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.870857239s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925449371s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870857239s@ mbc={}] exit Reset 0.000042 1 0.000072
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925449371s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870857239s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925449371s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870857239s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925449371s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870857239s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925449371s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870857239s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867900848s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813331604s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925449371s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.870857239s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867880821s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813331604s@ mbc={}] exit Reset 0.000057 1 0.000092
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867880821s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813331604s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867880821s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813331604s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867880821s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813331604s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.131605 13 0.000101
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867880821s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813331604s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.146847 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867880821s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813331604s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.146914 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.146963 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867644310s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813186646s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.074303 7 0.000038
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.081941 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.082008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867627144s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813186646s@ mbc={}] exit Reset 0.000039 1 0.000089
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867627144s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813186646s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867627144s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813186646s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867627144s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813186646s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.082097 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867627144s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813186646s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.139961 13 0.000218
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867627144s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813186646s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.147126 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.147205 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.147251 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859783173s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.805412292s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.074389 7 0.000045
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.082114 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.082231 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859755516s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805412292s@ mbc={}] exit Reset 0.000055 1 0.000086
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.082253 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859755516s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805412292s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859755516s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805412292s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859755516s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805412292s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859755516s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805412292s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859755516s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805412292s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925496101s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871192932s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925479889s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871192932s@ mbc={}] exit Reset 0.000046 1 0.000071
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925479889s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871192932s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925479889s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871192932s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925479889s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871192932s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925479889s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871192932s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925479889s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871192932s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925609589s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871208191s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.140202 13 0.000182
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.147355 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.147441 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.147470 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859578133s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.805381775s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859561920s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805381775s@ mbc={}] exit Reset 0.000037 1 0.000063
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.074468 7 0.000037
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859561920s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805381775s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.082203 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859561920s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805381775s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.082279 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859561920s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805381775s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859561920s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805381775s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859561920s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805381775s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.082315 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925427437s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871307373s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925411224s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871307373s@ mbc={}] exit Reset 0.000039 1 0.000071
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925411224s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871307373s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925411224s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871307373s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925411224s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871307373s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.132034 13 0.000103
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925411224s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871307373s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.147135 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.147448 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925411224s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871307373s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.147632 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867284775s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.813247681s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925439835s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871208191s@ mbc={}] exit Reset 0.000194 1 0.000224
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925439835s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871208191s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925439835s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871208191s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925439835s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871208191s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925439835s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871208191s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925439835s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871208191s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.074612 7 0.000044
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.082313 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.082377 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867267609s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813247681s@ mbc={}] exit Reset 0.000059 1 0.000065
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867267609s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813247681s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.082406 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867267609s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813247681s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867267609s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813247681s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867267609s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813247681s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.867267609s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.813247681s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925297737s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871330261s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925284386s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871330261s@ mbc={}] exit Reset 0.000031 1 0.000054
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925284386s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871330261s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925284386s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871330261s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925284386s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871330261s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925284386s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871330261s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925284386s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871330261s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.140614 13 0.000068
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.147894 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.147949 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.074713 7 0.000033
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.082293 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.147979 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.082394 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.140588 13 0.000105
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.082421 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.147870 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859183311s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.805305481s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.148010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.148037 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859161377s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805305481s@ mbc={}] exit Reset 0.000046 1 0.000073
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925175667s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871322632s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859161377s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805305481s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859185219s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.805335999s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859161377s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805305481s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859161377s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805305481s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859161377s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805305481s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859161377s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805305481s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925143242s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871322632s@ mbc={}] exit Reset 0.000057 1 0.000099
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925143242s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871322632s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925143242s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871322632s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925143242s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871322632s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925143242s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871322632s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925143242s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871322632s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.074662 7 0.000040
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.082243 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.082307 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859100342s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805335999s@ mbc={}] exit Reset 0.000102 1 0.000124
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859100342s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805335999s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859100342s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805335999s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859100342s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805335999s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.082346 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859100342s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805335999s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859100342s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805335999s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.140963 13 0.000068
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.148191 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.148296 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925184250s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871467590s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.148322 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925166130s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871467590s@ mbc={}] exit Reset 0.000042 1 0.000072
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925166130s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871467590s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858718872s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.805030823s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925166130s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871467590s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925166130s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871467590s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925166130s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871467590s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925166130s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871467590s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858699799s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805030823s@ mbc={}] exit Reset 0.000037 1 0.000069
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858699799s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805030823s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858699799s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805030823s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858699799s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805030823s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858699799s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805030823s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858699799s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805030823s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.141111 13 0.000063
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.148405 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.148527 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.148610 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.140876 13 0.000095
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.148025 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858575821s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.804977417s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.148586 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.074656 7 0.000048
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.148620 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.082052 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858558655s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804977417s@ mbc={}] exit Reset 0.000045 1 0.000068
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858558655s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804977417s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.082112 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858903885s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.805351257s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.082541 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858885765s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805351257s@ mbc={}] exit Reset 0.000039 1 0.000069
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858885765s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805351257s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858885765s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805351257s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858885765s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805351257s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925213814s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871688843s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858885765s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805351257s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858885765s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.805351257s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.141435 13 0.000092
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.149151 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.149200 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925176620s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] exit Reset 0.000073 1 0.000097
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.149221 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925176620s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925176620s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925176620s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925176620s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925176620s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858347893s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.804908752s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858330727s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804908752s@ mbc={}] exit Reset 0.000040 1 0.000057
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858330727s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804908752s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858330727s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804908752s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.141428 13 0.000164
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858330727s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804908752s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858330727s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804908752s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.149503 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858330727s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804908752s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.149555 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.149581 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.074848 7 0.000037
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.082299 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.082512 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858257294s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.804893494s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.082534 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858238220s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804893494s@ mbc={}] exit Reset 0.000040 1 0.000071
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858238220s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804893494s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925024033s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871688843s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858238220s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804893494s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858558655s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804977417s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858238220s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804893494s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858238220s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804893494s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858558655s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804977417s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858558655s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804977417s@ mbc={}] exit Start 0.000249 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925005913s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] exit Reset 0.000036 1 0.000063
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925005913s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858558655s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804977417s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925005913s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925005913s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925005913s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.925005913s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871688843s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.075172 7 0.000044
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.082587 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.082674 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.082698 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.858238220s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.804893494s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924736977s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871589661s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924706459s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871589661s@ mbc={}] exit Reset 0.000052 1 0.000078
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924706459s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871589661s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924706459s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871589661s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924706459s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871589661s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924706459s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871589661s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924706459s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871589661s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.146781 13 0.000084
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.150765 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.150859 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.150888 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.853088379s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 67.800247192s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.853067398s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.800247192s@ mbc={}] exit Reset 0.000042 1 0.000075
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.853067398s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.800247192s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.853067398s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.800247192s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.853067398s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.800247192s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.853067398s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.800247192s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.853067398s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 67.800247192s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.075626 7 0.000034
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.083022 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.083079 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.083106 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=41) [2] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924295425s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 69.871582031s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924279213s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871582031s@ mbc={}] exit Reset 0.000035 1 0.000064
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924279213s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871582031s@ mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924279213s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871582031s@ mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924279213s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871582031s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924279213s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871582031s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=10.924279213s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 69.871582031s@ mbc={}] enter Started/Stray
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 45 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 45 handle_osd_map epochs [44,45], i have 45, src has [1,45]
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000138 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000037
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000158 1 0.000040
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000069 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000023
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000036
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000010
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000027
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000037
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000037
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000012
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000035
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000012
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000030
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000020
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000031
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000115 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000032
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000229 1 0.000112
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000061 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000013
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000196 1 0.000037
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000033
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000111 1 0.000040
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000058 1 0.000033
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000022
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000037 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000022 1 0.000026
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000023
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000028 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000022
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000022
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000145 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000041
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000196 1 0.000052
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000118 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000020
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000114 1 0.000034
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000008
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000026
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000028
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000016
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000025
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000018
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000022 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000100 1 0.000058
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000029
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000037 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000013
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000152 1 0.000050
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000037 1 0.000024
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000023
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000091 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000020
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000157 1 0.000044
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000072 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000084 1 0.000027
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009905 2 0.000051
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006486 2 0.000616
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005120 2 0.000033
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004212 2 0.000017
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004066 2 0.000017
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000066 1 0.000035
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000060 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000018
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000062 1 0.000043
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000024
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000034
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000030
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005430 2 0.000038
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005987 2 0.000024
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000289 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000030
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000688 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000152 1 0.000734
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000077 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000028
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000112 1 0.000067
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000108 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000018
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000100 1 0.000050
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f(unlocked)] enter Initial
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000155 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000035 1 0.000055
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000085 1 0.000048
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009533 2 0.000028
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008769 2 0.000025
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008944 2 0.000016
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013671 2 0.000026
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017960 2 0.000030
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017774 2 0.000022
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017712 2 0.000019
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017584 2 0.000019
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000032 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017465 2 0.000025
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011088 2 0.000022
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009805 2 0.000027
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010623 2 0.000043
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009595 2 0.000033
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009409 2 0.000036
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009246 2 0.000021
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007190 2 0.000030
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007721 2 0.000414
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008081 2 0.000038
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006706 2 0.000062
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017361 2 0.000113
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017065 2 0.000058
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016905 2 0.000022
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016721 2 0.000019
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016492 2 0.000044
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016334 2 0.000020
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016082 2 0.000051
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017266 2 0.000017
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015293 2 0.000071
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016332 2 0.000019
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb715800
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:36.623934+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 45 heartbeat osd_stat(store_statfs(0x4fe159000/0x0/0x4ffc00000, data 0x2cdc2/0x71000, compress 0x0/0x0/0x0, omap 0x4878, meta 0x1a2b788), peers [0,1] op hist [0,0,0,0,0,0,0,0,15])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 344064 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 45 handle_osd_map epochs [46,46], i have 45, src has [1,46]
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.169521332s of 10.375212669s, submitted: 363
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 45 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982091 2 0.000035
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991594 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986974 2 0.000049
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000773 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982350 2 0.000037
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992256 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982449 2 0.000036
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993629 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982383 2 0.000035
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992080 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982345 2 0.000032
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991687 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982194 2 0.000033
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990463 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982878 2 0.000067
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000939 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982903 2 0.000048
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000760 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982790 2 0.000067
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000605 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982808 2 0.000042
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982847 2 0.000093
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000341 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000508 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982759 2 0.000040
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982571 2 0.000042
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993581 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982334 2 0.000046
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989178 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989894 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993256 2 0.000039
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998617 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982964 2 0.000049
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990869 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993509 2 0.000049
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000288 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993414 2 0.000027
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003816 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992539 2 0.000038
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998083 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988950 2 0.000046
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998639 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992854 2 0.000035
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998919 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979695 2 0.000026
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996691 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994055 2 0.000033
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998343 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994153 2 0.000035
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998282 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989199 2 0.000025
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979756 2 0.000034
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996393 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998251 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.980210 2 0.000053
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997820 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989452 2 0.000051
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998298 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.980287 2 0.000026
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997496 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.980076 2 0.000039
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996494 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979800 2 0.000502
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997145 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.980277 2 0.000047
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997068 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.980134 2 0.000041
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996424 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979846 2 0.000049
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995280 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979892 2 0.000035
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996287 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 46 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005908 4 0.000063
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006059 4 0.000093
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006010 4 0.000153
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005916 4 0.000044
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005872 4 0.000058
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006123 4 0.000067
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006069 4 0.000047
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006073 4 0.000064
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006053 4 0.000052
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006030 4 0.000077
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005985 4 0.000035
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005915 4 0.000072
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006011 4 0.000044
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005961 4 0.000056
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005978 4 0.000046
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012295 4 0.000074
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012142 4 0.000149
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011986 4 0.000087
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012240 4 0.000148
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011749 4 0.000064
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011799 4 0.000047
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011686 4 0.000051
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012239 4 0.000376
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011756 4 0.000054
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011730 4 0.000054
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011636 4 0.000050
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011461 4 0.000077
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011378 4 0.000050
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011689 4 0.000174
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011424 4 0.000038
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011378 4 0.000032
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011232 4 0.000064
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011382 4 0.000045
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000035 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011224 4 0.000046
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011771 4 0.000589
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011593 4 0.000055
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [2] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.042126 7 0.000088
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.041979 7 0.000105
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.042208 7 0.000043
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.042129 7 0.000057
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000124 1 0.000073
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000223 1 0.000027
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000261 1 0.000017
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000403 1 0.000016
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.053093 7 0.000074
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.050476 7 0.000153
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.050880 7 0.000065
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.050275 7 0.000077
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000044 1 0.000092
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.050839 7 0.000288
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.049641 7 0.000049
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.050820 7 0.000050
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.052777 7 0.000050
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.050370 7 0.000063
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.051582 7 0.000065
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.050342 7 0.000232
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.051300 7 0.000063
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.053612 7 0.000061
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000273 1 0.000071
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.051506 7 0.000040
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.053816 7 0.000104
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.053739 7 0.000086
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000338 1 0.000033
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000363 1 0.000047
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000372 1 0.000019
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000365 1 0.000016
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000555 1 0.000223
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000615 1 0.000179
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000612 1 0.000056
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000648 1 0.000025
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000704 1 0.000050
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000750 1 0.000023
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000793 1 0.000021
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000897 1 0.000228
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000915 1 0.000018
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001008 1 0.000135
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.059442 7 0.000079
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.058827 7 0.000078
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.058897 7 0.000069
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.058944 7 0.000070
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.058349 7 0.000047
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.058741 7 0.000112
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.055949 7 0.000294
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000102 1 0.000029
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.058595 7 0.000046
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.058292 7 0.000035
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.056683 7 0.000076
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.056888 7 0.000084
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.059192 7 0.000065
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000160 1 0.000013
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.056435 7 0.000078
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.056089 7 0.000050
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.057335 7 0.000055
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.056489 7 0.000066
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.059647 7 0.000058
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.057129 7 0.000058
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.055936 7 0.000138
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000259 1 0.000011
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.056342 7 0.000067
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.055590 7 0.000037
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.056765 7 0.000076
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.056750 7 0.000056
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000391 1 0.000032
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000430 1 0.000013
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000486 1 0.000115
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000586 1 0.000088
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.017030 1 0.000039
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000571 1 0.000016
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.017267 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.059441 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000615 1 0.000040
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000647 1 0.000014
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000658 1 0.000016
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000828 1 0.000068
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000920 1 0.000013
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000966 1 0.000012
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001053 1 0.000021
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001092 1 0.000015
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001129 1 0.000065
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001208 1 0.000055
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001225 1 0.000030
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001277 1 0.000018
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001324 1 0.000047
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001428 1 0.000288
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000858 1 0.000777
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.020250 1 0.000041
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.020513 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.062572 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.14( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.026636 1 0.000071
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.14( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.026953 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.14( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.069191 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.15( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.033578 1 0.000027
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.15( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.034033 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.15( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.076197 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:37.624083+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 5 sent 3 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:06.682923+0000 osd.2 (osd.2) 4 : cluster [DBG] 5.1c scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:06.693392+0000 osd.2 (osd.2) 5 : cluster [DBG] 5.1c scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030762 1 0.000056
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.030857 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.084062 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.037905 1 0.000060
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038245 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.088787 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.3( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045110 1 0.000056
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.3( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.045482 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.3( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.096392 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.5( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.052517 1 0.000026
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.5( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.052922 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.5( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.103803 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.2( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060241 1 0.000039
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.2( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.060676 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.2( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.111522 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067308 1 0.000056
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.067726 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.120527 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.b( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.074515 1 0.000054
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.b( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.075112 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.b( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.125468 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.082034 1 0.000114
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.082684 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.132352 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088988 1 0.000057
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.089647 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.140043 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.4( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.096793 1 0.000058
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.4( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.097497 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.4( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.148826 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.103745 1 0.000041
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.104495 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.154885 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.111220 1 0.000030
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112048 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.165696 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1e( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.118627 1 0.000051
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1e( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.119488 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1e( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.173340 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.126623 1 0.000045
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.127629 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.179171 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.19( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.133089 1 0.000039
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.19( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.134046 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.19( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.187813 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.7( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.140308 1 0.000022
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.7( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.141452 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.7( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.193082 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.143395 1 0.000020
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.143536 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.203013 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.12( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.150548 1 0.000026
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.12( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.150744 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.12( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.209601 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.11( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.157751 1 0.000032
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.11( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.158181 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.11( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.217152 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.165143 1 0.000030
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.165608 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.221832 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.9( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.172559 1 0.000036
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.9( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.173197 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.9( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.231585 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.d( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.179718 1 0.000019
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.d( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.180328 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.d( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.238643 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.16( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.187371 1 0.000029
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.16( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.188043 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.16( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.246683 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.194811 1 0.000019
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.195491 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.252204 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.201724 1 0.000028
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.202422 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.259341 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.13( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.209482 1 0.000027
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.13( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.210005 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.13( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.268849 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.17( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.216558 1 0.000053
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.17( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.217447 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.17( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.276674 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1a( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.223598 1 0.000038
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1a( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.224552 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1a( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.280668 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.232278 1 0.000875
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.232573 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.291497 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 827392 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 357455 data_alloc: 218103808 data_used: 252
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.238257 1 0.000053
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.239262 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.295781 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.245763 1 0.000035
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.246862 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.304227 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.253086 1 0.000052
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.254224 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.310185 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.260371 1 0.000045
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.261550 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.318737 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.268040 1 0.000053
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.269299 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.328972 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.275526 1 0.000038
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.276805 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.333182 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.282513 1 0.000028
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.283827 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.340622 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.289842 1 0.000046
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.291212 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.347992 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.297008 1 0.000031
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.298472 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.354082 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.304479 1 0.000018
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.305400 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.362578 0 0.000000
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:38.624324+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 4 last_log 7 sent 5 num 4 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:07.912765+0000 osd.2 (osd.2) 6 : cluster [DBG] 5.1f scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:07.923254+0000 osd.2 (osd.2) 7 : cluster [DBG] 5.1f scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 5)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:06.682923+0000 osd.2 (osd.2) 4 : cluster [DBG] 5.1c scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:06.693392+0000 osd.2 (osd.2) 5 : cluster [DBG] 5.1c scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 835584 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:39.624534+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 4 last_log 9 sent 7 num 4 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:08.751582+0000 osd.2 (osd.2) 8 : cluster [DBG] 5.10 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:08.762260+0000 osd.2 (osd.2) 9 : cluster [DBG] 5.10 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 7)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:07.912765+0000 osd.2 (osd.2) 6 : cluster [DBG] 5.1f scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:07.923254+0000 osd.2 (osd.2) 7 : cluster [DBG] 5.1f scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 9)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:08.751582+0000 osd.2 (osd.2) 8 : cluster [DBG] 5.10 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:08.762260+0000 osd.2 (osd.2) 9 : cluster [DBG] 5.10 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe14d000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 737280 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:40.624757+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:41.624943+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:10.769268+0000 osd.2 (osd.2) 10 : cluster [DBG] 2.14 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:10.779802+0000 osd.2 (osd.2) 11 : cluster [DBG] 2.14 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 860160 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 11)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:10.769268+0000 osd.2 (osd.2) 10 : cluster [DBG] 2.14 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:10.779802+0000 osd.2 (osd.2) 11 : cluster [DBG] 2.14 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:42.625421+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 860160 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 350815 data_alloc: 218103808 data_used: 252
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:43.625668+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:12.786691+0000 osd.2 (osd.2) 12 : cluster [DBG] 2.12 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:12.797191+0000 osd.2 (osd.2) 13 : cluster [DBG] 2.12 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 827392 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 13)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:12.786691+0000 osd.2 (osd.2) 12 : cluster [DBG] 2.12 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:12.797191+0000 osd.2 (osd.2) 13 : cluster [DBG] 2.12 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:44.625883+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:45.626055+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:46.626471+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:47.626684+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 350815 data_alloc: 218103808 data_used: 252
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:48.626879+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.759116173s of 11.233061790s, submitted: 219
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62087168 unmapped: 819200 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:49.627028+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:18.791570+0000 osd.2 (osd.2) 14 : cluster [DBG] 2.10 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:18.802105+0000 osd.2 (osd.2) 15 : cluster [DBG] 2.10 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62087168 unmapped: 819200 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 15)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:18.791570+0000 osd.2 (osd.2) 14 : cluster [DBG] 2.10 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:18.802105+0000 osd.2 (osd.2) 15 : cluster [DBG] 2.10 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:50.627420+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:19.780294+0000 osd.2 (osd.2) 16 : cluster [DBG] 5.17 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:19.790771+0000 osd.2 (osd.2) 17 : cluster [DBG] 5.17 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:51.627799+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 17)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:19.780294+0000 osd.2 (osd.2) 16 : cluster [DBG] 5.17 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:19.790771+0000 osd.2 (osd.2) 17 : cluster [DBG] 5.17 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:52.629983+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 355641 data_alloc: 218103808 data_used: 252
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:53.630377+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:54.630513+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:55.631210+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:24.679091+0000 osd.2 (osd.2) 18 : cluster [DBG] 5.8 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:24.689759+0000 osd.2 (osd.2) 19 : cluster [DBG] 5.8 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 19)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:24.679091+0000 osd.2 (osd.2) 18 : cluster [DBG] 5.8 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:24.689759+0000 osd.2 (osd.2) 19 : cluster [DBG] 5.8 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:56.631695+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62119936 unmapped: 786432 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:57.632253+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:26.761995+0000 osd.2 (osd.2) 20 : cluster [DBG] 2.e scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:26.772683+0000 osd.2 (osd.2) 21 : cluster [DBG] 2.e scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62152704 unmapped: 753664 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 362876 data_alloc: 218103808 data_used: 252
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 21)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:26.761995+0000 osd.2 (osd.2) 20 : cluster [DBG] 2.e scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:26.772683+0000 osd.2 (osd.2) 21 : cluster [DBG] 2.e scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:58.633060+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:27.786806+0000 osd.2 (osd.2) 22 : cluster [DBG] 2.1a scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:27.797511+0000 osd.2 (osd.2) 23 : cluster [DBG] 2.1a scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 737280 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 23)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:27.786806+0000 osd.2 (osd.2) 22 : cluster [DBG] 2.1a scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:27.797511+0000 osd.2 (osd.2) 23 : cluster [DBG] 2.1a scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:59.633597+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.918220520s of 11.026761055s, submitted: 10
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 737280 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:00.634258+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:29.818470+0000 osd.2 (osd.2) 24 : cluster [DBG] 2.c scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:29.829105+0000 osd.2 (osd.2) 25 : cluster [DBG] 2.c scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 737280 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 25)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:29.818470+0000 osd.2 (osd.2) 24 : cluster [DBG] 2.c scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:29.829105+0000 osd.2 (osd.2) 25 : cluster [DBG] 2.c scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:01.635087+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 720896 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:02.635701+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:31.766591+0000 osd.2 (osd.2) 26 : cluster [DBG] 5.a scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:31.777217+0000 osd.2 (osd.2) 27 : cluster [DBG] 5.a scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 704512 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 370109 data_alloc: 218103808 data_used: 252
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 27)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:31.766591+0000 osd.2 (osd.2) 26 : cluster [DBG] 5.a scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:31.777217+0000 osd.2 (osd.2) 27 : cluster [DBG] 5.a scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:03.636232+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:32.742665+0000 osd.2 (osd.2) 28 : cluster [DBG] 5.b scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:32.753181+0000 osd.2 (osd.2) 29 : cluster [DBG] 5.b scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 688128 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 29)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:32.742665+0000 osd.2 (osd.2) 28 : cluster [DBG] 5.b scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:32.753181+0000 osd.2 (osd.2) 29 : cluster [DBG] 5.b scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:04.636781+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 688128 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:05.637044+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:34.733322+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.0 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:34.743994+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.0 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 31)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:34.733322+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.0 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:34.743994+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.0 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 679936 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:06.637479+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 679936 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:07.637620+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 679936 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 372520 data_alloc: 218103808 data_used: 252
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:08.637941+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 679936 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:09.638226+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:38.801236+0000 osd.2 (osd.2) 32 : cluster [DBG] 2.1 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:38.811787+0000 osd.2 (osd.2) 33 : cluster [DBG] 2.1 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 33)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:38.801236+0000 osd.2 (osd.2) 32 : cluster [DBG] 2.1 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:38.811787+0000 osd.2 (osd.2) 33 : cluster [DBG] 2.1 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 671744 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:10.638591+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.792082787s of 10.928949356s, submitted: 10
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 663552 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:11.638967+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:40.747450+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.6 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:40.758058+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.6 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 35)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:40.747450+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.6 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:40.758058+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.6 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 663552 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:12.639401+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 663552 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 379753 data_alloc: 218103808 data_used: 252
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:13.639688+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:42.760250+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.e scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:42.770912+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.e scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 37)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:42.760250+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.e scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:42.770912+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.e scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:14.639868+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 606208 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:15.640047+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:44.779067+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.d scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:44.789617+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.d scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 606208 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 39)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:44.779067+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.d scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:44.789617+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.d scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:16.640381+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 598016 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:17.640676+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 598016 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 382164 data_alloc: 218103808 data_used: 252
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:18.640926+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:19.641255+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:20.641614+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.782402992s of 10.014264107s, submitted: 6
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:21.641872+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:50.761568+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.1b scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:50.771694+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.1b scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 41)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:50.761568+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.1b scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:50.771694+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.1b scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 581632 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:22.642230+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:40 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 581632 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 386990 data_alloc: 218103808 data_used: 252
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:23.642381+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:52.744315+0000 osd.2 (osd.2) 42 : cluster [DBG] 2.1e scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:52.754832+0000 osd.2 (osd.2) 43 : cluster [DBG] 2.1e scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 540672 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 43)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:52.744315+0000 osd.2 (osd.2) 42 : cluster [DBG] 2.1e scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:52.754832+0000 osd.2 (osd.2) 43 : cluster [DBG] 2.1e scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:24.642685+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:53.727713+0000 osd.2 (osd.2) 44 : cluster [DBG] 2.0 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:53.738254+0000 osd.2 (osd.2) 45 : cluster [DBG] 2.0 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 532480 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 45)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:53.727713+0000 osd.2 (osd.2) 44 : cluster [DBG] 2.0 scrub starts
Jan 29 09:35:40 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:53.738254+0000 osd.2 (osd.2) 45 : cluster [DBG] 2.0 scrub ok
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:25.642986+0000)
Jan 29 09:35:40 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 516096 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:40 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:26.643243+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 548864 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:27.643455+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:56.694657+0000 osd.2 (osd.2) 46 : cluster [DBG] 6.f scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:56.708589+0000 osd.2 (osd.2) 47 : cluster [DBG] 6.f scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 548864 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 394225 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 47)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:56.694657+0000 osd.2 (osd.2) 46 : cluster [DBG] 6.f scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:56.708589+0000 osd.2 (osd.2) 47 : cluster [DBG] 6.f scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:28.643726+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:57.652986+0000 osd.2 (osd.2) 48 : cluster [DBG] 4.1a scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:57.663600+0000 osd.2 (osd.2) 49 : cluster [DBG] 4.1a scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 49)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:57.652986+0000 osd.2 (osd.2) 48 : cluster [DBG] 4.1a scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:57.663600+0000 osd.2 (osd.2) 49 : cluster [DBG] 4.1a scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:29.643984+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:30.644205+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:31.644481+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 614400 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:32.644677+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 614400 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 394225 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:33.644961+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.788371086s of 12.929638863s, submitted: 10
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 1654784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:34.645242+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:03.691462+0000 osd.2 (osd.2) 50 : cluster [DBG] 4.18 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:03.702058+0000 osd.2 (osd.2) 51 : cluster [DBG] 4.18 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 1646592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 51)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:03.691462+0000 osd.2 (osd.2) 50 : cluster [DBG] 4.18 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:03.702058+0000 osd.2 (osd.2) 51 : cluster [DBG] 4.18 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:35.645503+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:04.698786+0000 osd.2 (osd.2) 52 : cluster [DBG] 4.e scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:04.709334+0000 osd.2 (osd.2) 53 : cluster [DBG] 4.e scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 1638400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 53)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:04.698786+0000 osd.2 (osd.2) 52 : cluster [DBG] 4.e scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:04.709334+0000 osd.2 (osd.2) 53 : cluster [DBG] 4.e scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:36.645765+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 1638400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:37.645952+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 1638400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 399049 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:38.646100+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:08.634201+0000 osd.2 (osd.2) 54 : cluster [DBG] 4.1 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:08.644922+0000 osd.2 (osd.2) 55 : cluster [DBG] 4.1 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 1630208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:39.646305+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 55)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:08.634201+0000 osd.2 (osd.2) 54 : cluster [DBG] 4.1 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:08.644922+0000 osd.2 (osd.2) 55 : cluster [DBG] 4.1 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 1630208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:40.646485+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 1630208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:41.646674+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 1622016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:42.646833+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 1622016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 401460 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:43.646960+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 1613824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:44.647169+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 1613824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:45.647363+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 1613824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:46.647594+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 1605632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:47.647734+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 1605632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 401460 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:48.648071+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 1597440 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:49.648260+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 1597440 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.788869858s of 16.955900192s, submitted: 6
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:50.648392+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 1 last_log 56 sent 55 num 1 unsent 1 sending 1
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:20.647273+0000 osd.2 (osd.2) 56 : cluster [DBG] 4.a scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 56)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:20.647273+0000 osd.2 (osd.2) 56 : cluster [DBG] 4.a scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 1589248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:51.648677+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 1 last_log 57 sent 56 num 1 unsent 1 sending 1
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:20.657986+0000 osd.2 (osd.2) 57 : cluster [DBG] 4.a scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 57)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:20.657986+0000 osd.2 (osd.2) 57 : cluster [DBG] 4.a scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 1581056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:52.649736+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 1581056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 403871 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:53.649988+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 1564672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:54.650218+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 1564672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:55.650398+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62398464 unmapped: 1556480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:56.650613+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62398464 unmapped: 1556480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:57.650759+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:27.594927+0000 osd.2 (osd.2) 58 : cluster [DBG] 6.8 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:27.605763+0000 osd.2 (osd.2) 59 : cluster [DBG] 6.8 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 59)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:27.594927+0000 osd.2 (osd.2) 58 : cluster [DBG] 6.8 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:27.605763+0000 osd.2 (osd.2) 59 : cluster [DBG] 6.8 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 1548288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 406282 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:58.651059+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 1540096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:59.651239+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 1540096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:00.651401+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:29.662755+0000 osd.2 (osd.2) 60 : cluster [DBG] 6.15 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:29.680505+0000 osd.2 (osd.2) 61 : cluster [DBG] 6.15 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 61)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:29.662755+0000 osd.2 (osd.2) 60 : cluster [DBG] 6.15 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:29.680505+0000 osd.2 (osd.2) 61 : cluster [DBG] 6.15 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 1540096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.951041222s of 10.965127945s, submitted: 6
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:01.651632+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:31.611850+0000 osd.2 (osd.2) 62 : cluster [DBG] 6.14 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:31.636611+0000 osd.2 (osd.2) 63 : cluster [DBG] 6.14 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 1515520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 63)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:31.611850+0000 osd.2 (osd.2) 62 : cluster [DBG] 6.14 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:31.636611+0000 osd.2 (osd.2) 63 : cluster [DBG] 6.14 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:02.651976+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:32.582089+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.13 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:32.592671+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.13 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 1515520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 413521 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 65)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:32.582089+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.13 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:32.592671+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.13 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:03.652275+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:33.558918+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.11 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:33.569375+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.11 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 1490944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:04.652497+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 67)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:33.558918+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.11 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:33.569375+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.11 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 1490944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:05.652721+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:35.525904+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.1c scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:35.536484+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.1c scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 69)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:35.525904+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.1c scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:35.536484+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.1c scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1474560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:06.652987+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1474560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:07.653264+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1474560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418347 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:08.653591+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 1466368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:09.653909+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 1466368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:10.654091+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:40.554658+0000 osd.2 (osd.2) 70 : cluster [DBG] 6.11 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:40.565208+0000 osd.2 (osd.2) 71 : cluster [DBG] 6.11 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 1449984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 71)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:40.554658+0000 osd.2 (osd.2) 70 : cluster [DBG] 6.11 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:40.565208+0000 osd.2 (osd.2) 71 : cluster [DBG] 6.11 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:11.654527+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:41.566788+0000 osd.2 (osd.2) 72 : cluster [DBG] 6.13 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:41.584440+0000 osd.2 (osd.2) 73 : cluster [DBG] 6.13 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.617183685s of 10.058506966s, submitted: 11
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62529536 unmapped: 1425408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 73)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:41.566788+0000 osd.2 (osd.2) 72 : cluster [DBG] 6.13 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:41.584440+0000 osd.2 (osd.2) 73 : cluster [DBG] 6.13 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:12.654832+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:42.528591+0000 osd.2 (osd.2) 74 : cluster [DBG] 6.1f scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:42.546260+0000 osd.2 (osd.2) 75 : cluster [DBG] 6.1f scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62529536 unmapped: 1425408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 425586 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 75)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:42.528591+0000 osd.2 (osd.2) 74 : cluster [DBG] 6.1f scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:42.546260+0000 osd.2 (osd.2) 75 : cluster [DBG] 6.1f scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:13.655071+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 1417216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:14.655307+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 1417216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:15.655494+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1409024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:16.655715+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1409024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:17.655996+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:47.576186+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.18 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:47.586756+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.18 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 77)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:47.576186+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.18 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:47.586756+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.18 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1409024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 427999 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:18.656259+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 1400832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:19.656447+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:49.549653+0000 osd.2 (osd.2) 78 : cluster [DBG] 3.16 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:49.560215+0000 osd.2 (osd.2) 79 : cluster [DBG] 3.16 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 79)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:49.549653+0000 osd.2 (osd.2) 78 : cluster [DBG] 3.16 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:49.560215+0000 osd.2 (osd.2) 79 : cluster [DBG] 3.16 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 1392640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:20.656714+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:50.504847+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.11 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:50.515417+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.11 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 81)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:50.504847+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.11 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:50.515417+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.11 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 1368064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:21.656953+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 1368064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:22.657189+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 1368064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432825 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:23.657381+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 1335296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.624273300s of 12.843159676s, submitted: 9
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:24.657623+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:54.514117+0000 osd.2 (osd.2) 82 : cluster [DBG] 4.1b scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:54.524653+0000 osd.2 (osd.2) 83 : cluster [DBG] 4.1b scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 83)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:54.514117+0000 osd.2 (osd.2) 82 : cluster [DBG] 4.1b scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:54.524653+0000 osd.2 (osd.2) 83 : cluster [DBG] 4.1b scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 1327104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:25.657899+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:55.480797+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.15 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:55.491341+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.15 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 85)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:55.480797+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.15 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:55.491341+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.15 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1318912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:26.658152+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:56.521936+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.11 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:56.532509+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.11 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 87)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:56.521936+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.11 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:56.532509+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.11 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1318912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:27.658544+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1318912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 440064 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:28.658946+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 1310720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:29.659650+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 1310720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:30.660202+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62652416 unmapped: 1302528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:31.660859+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62652416 unmapped: 1302528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:32.661291+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:02.479396+0000 osd.2 (osd.2) 88 : cluster [DBG] 7.a scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:02.490070+0000 osd.2 (osd.2) 89 : cluster [DBG] 7.a scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 89)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:02.479396+0000 osd.2 (osd.2) 88 : cluster [DBG] 7.a scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:02.490070+0000 osd.2 (osd.2) 89 : cluster [DBG] 7.a scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 442475 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1286144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:33.661907+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1286144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:34.662254+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1286144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:35.662406+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1277952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:36.662567+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1277952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.825295448s of 13.027392387s, submitted: 8
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:37.662796+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:07.541667+0000 osd.2 (osd.2) 90 : cluster [DBG] 7.1c scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:07.552248+0000 osd.2 (osd.2) 91 : cluster [DBG] 7.1c scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 91)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:07.541667+0000 osd.2 (osd.2) 90 : cluster [DBG] 7.1c scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:07.552248+0000 osd.2 (osd.2) 91 : cluster [DBG] 7.1c scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 444888 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 1269760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:38.663115+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 1269760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:39.663482+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:09.511755+0000 osd.2 (osd.2) 92 : cluster [DBG] 3.e scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:09.522381+0000 osd.2 (osd.2) 93 : cluster [DBG] 3.e scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 93)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:09.511755+0000 osd.2 (osd.2) 92 : cluster [DBG] 3.e scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:09.522381+0000 osd.2 (osd.2) 93 : cluster [DBG] 3.e scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 1253376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:40.663762+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 1253376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:41.664050+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 1245184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:42.664550+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:12.509997+0000 osd.2 (osd.2) 94 : cluster [DBG] 7.8 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:12.520583+0000 osd.2 (osd.2) 95 : cluster [DBG] 7.8 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 95)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:12.509997+0000 osd.2 (osd.2) 94 : cluster [DBG] 7.8 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:12.520583+0000 osd.2 (osd.2) 95 : cluster [DBG] 7.8 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 449710 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 1228800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:43.664799+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 1204224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:44.664967+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 1204224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:45.665219+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1171456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:46.665375+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:16.425566+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.2 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:16.436181+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.2 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 97)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:16.425566+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.2 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:16.436181+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.2 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1171456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:47.665674+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:17.386841+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.1 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:17.397435+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.1 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 99)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:17.386841+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.1 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:17.397435+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.1 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454532 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1163264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:48.665854+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1163264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:49.666036+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1163264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:50.666190+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.886688232s of 13.916419983s, submitted: 10
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1163264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:51.666356+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:21.457997+0000 osd.2 (osd.2) 100 : cluster [DBG] 3.7 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:21.468603+0000 osd.2 (osd.2) 101 : cluster [DBG] 3.7 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 101)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:21.457997+0000 osd.2 (osd.2) 100 : cluster [DBG] 3.7 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:21.468603+0000 osd.2 (osd.2) 101 : cluster [DBG] 3.7 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1163264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:52.666663+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 459354 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1146880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:53.666851+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:23.451643+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.5 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:23.462269+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.5 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 103)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:23.451643+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.5 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:23.462269+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.5 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1146880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:54.667109+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:24.449104+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.c scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:24.459710+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.c scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 105)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:24.449104+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.c scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:24.459710+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.c scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1138688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:55.667441+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:25.453408+0000 osd.2 (osd.2) 106 : cluster [DBG] 7.1a scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:25.463988+0000 osd.2 (osd.2) 107 : cluster [DBG] 7.1a scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 107)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:25.453408+0000 osd.2 (osd.2) 106 : cluster [DBG] 7.1a scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:25.463988+0000 osd.2 (osd.2) 107 : cluster [DBG] 7.1a scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1130496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:56.667897+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1130496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:57.668120+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464178 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 1122304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:58.668364+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1130496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:59.668606+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:29.517001+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.e scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:29.527585+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.e scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 109)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:29.517001+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.e scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:29.527585+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.e scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1130496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:00.669369+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 1122304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:01.669581+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.979720116s of 11.018548965s, submitted: 10
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 1114112 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:02.669875+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:32.476637+0000 osd.2 (osd.2) 110 : cluster [DBG] 3.1d scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:32.487068+0000 osd.2 (osd.2) 111 : cluster [DBG] 3.1d scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 111)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:32.476637+0000 osd.2 (osd.2) 110 : cluster [DBG] 3.1d scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:32.487068+0000 osd.2 (osd.2) 111 : cluster [DBG] 3.1d scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 469002 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1105920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:03.670419+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 1081344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:04.670619+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 1081344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:05.671116+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:35.448290+0000 osd.2 (osd.2) 112 : cluster [DBG] 3.1e scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:35.458773+0000 osd.2 (osd.2) 113 : cluster [DBG] 3.1e scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 113)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:35.448290+0000 osd.2 (osd.2) 112 : cluster [DBG] 3.1e scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:35.458773+0000 osd.2 (osd.2) 113 : cluster [DBG] 3.1e scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 1073152 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:06.671541+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 1064960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:07.672052+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:37.460773+0000 osd.2 (osd.2) 114 : cluster [DBG] 3.5 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:37.471263+0000 osd.2 (osd.2) 115 : cluster [DBG] 3.5 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 115)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:37.460773+0000 osd.2 (osd.2) 114 : cluster [DBG] 3.5 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:37.471263+0000 osd.2 (osd.2) 115 : cluster [DBG] 3.5 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473826 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 1056768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:08.672419+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 1048576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:09.672756+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:39.443892+0000 osd.2 (osd.2) 116 : cluster [DBG] 3.8 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:39.454490+0000 osd.2 (osd.2) 117 : cluster [DBG] 3.8 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 117)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:39.443892+0000 osd.2 (osd.2) 116 : cluster [DBG] 3.8 scrub starts
Jan 29 09:35:41 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:39.454490+0000 osd.2 (osd.2) 117 : cluster [DBG] 3.8 scrub ok
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:10.673063+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:11.673290+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:12.673515+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:13.673672+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:14.673852+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:15.674018+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:16.674194+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:17.674474+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:18.674773+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:19.675047+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:20.675340+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 999424 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:21.675522+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 999424 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:22.675663+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 991232 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:23.675796+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:24.676038+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:25.676287+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:26.676526+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:27.676724+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:28.676901+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:29.677057+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:30.677273+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:31.677554+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:32.677724+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:33.677950+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:34.678215+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:35.678485+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:36.678730+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:37.678880+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:38.679044+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 909312 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:39.679260+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 909312 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:40.679465+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 909312 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:41.679709+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:42.680079+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 892928 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:43.680244+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 884736 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:44.680447+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:45.680586+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:46.680767+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 860160 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:47.680944+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 860160 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:48.681097+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 860160 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:49.681209+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:50.681365+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:51.681678+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:52.682314+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:53.682633+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:54.682901+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:55.683079+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:56.683228+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:57.683660+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:58.683813+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:59.684030+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:00.684214+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:01.684420+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:02.684561+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:03.684709+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:04.684883+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:05.685325+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:06.685585+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:07.685757+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:08.686210+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:09.686649+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:10.687037+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:11.687314+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:12.687456+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:13.687618+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:14.687784+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:15.687943+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:16.688263+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:17.688461+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:18.688765+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:19.689046+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:20.689336+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:21.690340+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:22.690613+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:23.690835+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:24.691070+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:25.691254+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:26.691456+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:27.691724+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:28.691992+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:29.692268+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:30.692741+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:31.693116+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:32.693407+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:33.693777+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:34.694060+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:35.694252+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:36.694516+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:37.694827+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:38.695030+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:39.695238+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:40.695381+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:41.695631+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:42.696017+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:43.696177+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:44.696357+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:45.696562+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:46.696697+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:47.696850+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:48.697019+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:49.697201+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 614400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:50.697427+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:51.697673+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:52.697832+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:53.697976+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:54.698177+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:55.698337+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:56.698488+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:57.698704+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:58.698930+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:59.699119+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:00.699299+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:01.699462+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:02.699593+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:03.699723+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:04.699851+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:05.700004+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:06.700204+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:07.700429+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:08.700628+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:09.700743+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:10.700901+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:11.701435+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:12.701613+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:13.701755+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:14.701881+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:15.702008+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:16.702145+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:17.702296+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:18.702447+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:19.702647+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:20.702843+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:21.703198+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:22.703366+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:23.703571+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:24.703800+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:25.704016+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:26.704269+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:27.704428+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:28.704627+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:29.704814+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:30.704975+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:31.705193+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:32.705412+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:33.705707+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:34.705975+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:35.706247+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:36.706398+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:37.706593+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:38.706804+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:39.707020+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:40.707251+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:41.707491+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:42.707704+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:43.707877+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:44.708182+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:45.708392+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:46.708598+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:47.708769+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:48.708932+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:49.709177+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:50.709308+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:51.709504+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:52.709679+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:53.709917+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:54.710084+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:55.710258+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:56.710512+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:57.710750+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:58.710906+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:59.711087+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:00.711267+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 335872 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:01.711508+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 335872 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:02.711716+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:03.711985+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:04.712194+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:05.712388+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:06.712616+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:07.712805+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:08.712969+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:09.713196+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:10.713371+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:11.713575+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:12.713758+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:13.714006+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 286720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:14.714200+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 286720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:15.714369+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:16.714553+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:17.714732+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:18.715011+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:19.715166+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:20.715332+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:21.715622+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:22.715853+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:23.715991+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:24.716240+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:25.716416+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:26.716573+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:27.716788+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:28.717052+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:29.717204+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:30.717375+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:31.717634+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:32.717892+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:33.718043+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:34.718245+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:35.718475+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:36.718669+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:37.718825+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:38.719008+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:39.719248+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:40.719425+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:41.719627+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:42.719754+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:43.719914+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:44.720088+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:45.720262+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:46.720471+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:47.720694+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:48.720839+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:49.721043+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:50.721236+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:51.721428+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:52.721570+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:53.721731+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:54.721868+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:55.722056+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:56.722268+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:57.722432+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:58.722677+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:59.722904+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:00.723103+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:01.723412+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:02.723669+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:03.723809+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:04.723963+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:05.724434+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:06.724616+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:07.725348+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:08.725489+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:09.725654+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:10.725813+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:11.726051+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:12.726227+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 90112 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:13.726410+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 90112 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:14.726599+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:15.726770+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:16.726920+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:17.727088+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:18.727241+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:19.727458+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:20.727691+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:21.727970+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:22.728168+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:23.728395+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:24.728688+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:25.728883+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 49152 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:26.729073+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 49152 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:27.729203+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:28.729341+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:29.729486+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:30.729607+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:31.729835+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:32.729979+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:33.730194+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:34.730368+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:35.730603+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:36.730771+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:37.730972+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:38.731215+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:39.731466+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:40.731727+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:41.732058+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:42.732272+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:43.732501+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:44.732746+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:45.732956+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:46.733234+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:47.733386+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:48.733552+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:49.733724+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:50.733889+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:51.734088+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:52.734244+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:53.734441+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:54.734624+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:55.734788+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:56.734975+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:57.735188+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:58.735310+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:59.735403+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:00.735568+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:01.735791+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:02.735948+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:03.736094+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:04.736228+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:05.736429+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:06.736622+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:07.736793+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:08.736938+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:09.737096+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:10.737262+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:11.737445+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:12.737598+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:13.737789+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:14.737985+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:15.738350+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:16.738511+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:17.738706+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:18.738901+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:19.739059+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:20.739205+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:21.739384+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:22.739570+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:23.739717+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:24.739876+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:25.740051+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:26.740187+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:27.740304+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:28.740490+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:29.740610+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:30.740764+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:31.740976+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:32.741235+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:33.741397+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:34.741581+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:35.741719+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:36.741919+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:37.742157+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:38.742448+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:39.742643+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:40.742871+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:41.743093+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:42.743282+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:43.743448+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:44.743654+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:45.743815+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:46.744056+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:47.744241+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:48.744419+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:49.744572+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:50.744738+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:51.744913+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:52.745056+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:53.745207+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:54.745355+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:55.745502+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:56.745636+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:57.745919+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:58.746081+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:59.746441+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:00.746588+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:01.747352+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:02.747619+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:03.747770+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:04.747964+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:05.748166+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:06.748357+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:07.748510+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:08.748668+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:09.748819+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:10.749034+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:11.749202+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:12.749367+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:13.749512+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:14.749667+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:15.749884+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:16.750086+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:17.750236+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:18.750413+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:19.750655+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:20.750819+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:21.751086+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:22.751389+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:23.751587+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:24.751825+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:25.751983+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:26.752228+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:27.752379+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:28.752557+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:29.752802+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:30.752947+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:31.753163+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:32.753374+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:33.753596+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:34.754336+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:35.754473+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:36.754676+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Cumulative writes: 4171 writes, 19K keys, 4171 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4171 writes, 357 syncs, 11.68 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4171 writes, 19K keys, 4171 commit groups, 1.0 writes per commit group, ingest: 15.88 MB, 0.03 MB/s
                                           Interval WAL: 4171 writes, 357 syncs, 11.68 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:37.754831+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:38.755036+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:39.755199+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:40.755445+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:41.755730+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:42.755938+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:43.756242+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:44.756439+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:45.756608+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:46.756812+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:47.756989+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:48.757215+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:49.757422+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:50.757589+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:51.758004+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:52.758163+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:53.758329+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:54.758517+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:55.758664+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:56.758844+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:57.759004+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:58.759168+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:59.759298+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:00.759458+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:01.759641+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:02.759791+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:03.759934+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:04.760075+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:05.760197+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:06.760336+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:07.760481+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:08.760653+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:09.760846+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:10.761009+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:11.761217+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:12.761423+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:13.761619+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:14.761767+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:15.761913+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:16.762066+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:17.762201+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:18.762335+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:19.762469+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:20.762647+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:21.762869+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:22.762990+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:23.763158+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:24.763323+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:25.763663+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:26.763840+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:27.764022+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:28.764206+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:29.764323+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:30.764445+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:31.764587+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:32.764712+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:33.764827+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:34.764964+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:35.765113+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:36.765269+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:37.765443+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:38.765618+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:39.765779+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:40.765940+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:41.766156+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:42.766308+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:43.767243+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:44.767387+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:45.767556+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:46.767713+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:47.767873+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:48.768025+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:49.768195+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:50.768299+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:51.768441+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:52.768569+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:53.768708+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:54.768861+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:55.769019+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:56.769411+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:57.769611+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:58.769757+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:59.769921+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:00.770117+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:01.770342+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 319488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:02.770468+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 319488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:03.770627+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 319488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:04.770787+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 311296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:05.770973+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 311296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:06.771205+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 303104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:07.771426+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 303104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:08.771659+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 303104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:09.771840+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 294912 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:10.772026+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 294912 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:11.772239+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:12.772435+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:13.772677+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:14.772875+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:15.773070+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:16.773225+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:17.773418+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:18.773565+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:19.773722+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:20.773900+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:21.774160+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:22.774378+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:23.774624+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:24.774808+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 245760 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:25.774964+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 245760 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:26.775154+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 245760 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:27.775303+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 237568 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:28.775423+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 237568 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:29.775574+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 229376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:30.775717+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 229376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:31.775872+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 229376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:32.776032+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 221184 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:33.776238+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 221184 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:34.776395+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 212992 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:35.776521+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 212992 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:36.776646+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:37.776813+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:38.776975+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:39.777180+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:40.777391+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:41.777609+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:42.777739+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:43.777890+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:44.778039+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:45.778199+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:46.778395+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:47.778568+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:48.778738+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:49.778895+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:50.779235+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:51.779414+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:52.779692+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:53.779845+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:54.780053+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:55.780250+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:56.780450+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:57.780670+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:58.780898+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:59.781170+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:00.781334+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:01.781518+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:02.781675+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:03.781943+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:04.782103+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:05.782289+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:06.782485+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:07.782645+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:08.782812+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:09.782950+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:10.783160+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:11.783368+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:12.783562+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:13.783735+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:14.783855+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:15.784046+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:16.784196+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:17.784349+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:18.784524+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:19.784710+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:20.784914+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:21.785097+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:22.785200+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:23.785334+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:24.785483+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:25.785640+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:26.785847+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:27.786025+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:28.786255+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:29.786424+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:30.786621+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:31.786842+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:32.787051+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:33.787257+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:34.787466+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:35.787631+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:36.787794+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:37.787982+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:38.788299+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:39.788567+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:40.789330+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:41.789645+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:42.790081+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:43.790693+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:44.790848+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:45.791002+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:46.791174+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:47.791346+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:48.791544+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:49.791732+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:50.791937+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:51.792101+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:52.792330+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:53.792510+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:54.792636+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:55.792815+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:56.793003+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:57.793168+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:58.793341+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:59.793476+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:00.793637+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:01.793810+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:02.793994+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:03.794194+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:04.794330+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:05.794480+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:06.794626+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:07.794762+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:08.794938+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:09.795116+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:10.795326+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:11.795519+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:12.795675+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:13.795850+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:14.796094+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:15.796195+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:16.796376+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:17.796552+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:18.796694+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:19.796833+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:20.797003+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:21.797250+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:22.797385+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:23.797528+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:24.797704+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:25.797891+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:26.798033+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:27.798203+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:28.798441+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:29.798572+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:30.798762+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:31.798969+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:32.799188+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:33.799315+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:34.799466+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:35.799665+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:36.799830+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:37.800037+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:38.800247+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:39.800429+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:40.800566+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:41.800769+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:42.800966+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:43.801123+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:44.801340+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:45.801568+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:46.801770+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:47.801945+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:48.802194+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:49.802424+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:50.802616+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:51.802906+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:52.803084+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:53.803220+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:54.803461+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:55.803716+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:56.804197+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:57.804413+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:58.804747+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:59.805037+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:00.805223+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:01.805456+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:02.805663+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:03.805842+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:04.806029+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:05.806227+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:06.806443+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:07.806595+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:08.806801+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:09.807011+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:10.807240+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:11.807433+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:12.807586+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:13.807769+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:14.807958+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:15.808161+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:16.808310+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:17.808685+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:18.808880+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:19.809565+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:20.809733+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:21.810104+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:22.810842+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:23.811305+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:24.811462+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:25.812543+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:26.813247+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:27.813460+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:28.813610+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:29.813783+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:30.814295+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:31.814633+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:32.814793+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:33.814942+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:34.815100+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:35.815217+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:36.815385+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:37.815682+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:38.815849+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:39.816067+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:40.816324+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:41.816638+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:42.816914+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:43.817059+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:44.817219+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:45.817358+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:46.817556+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:47.817964+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:48.818111+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: mgrc ms_handle_reset ms_handle_reset con 0x55bdea7d6000
Jan 29 09:35:41 compute-0 ceph-osd[88193]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1795618739
Jan 29 09:35:41 compute-0 ceph-osd[88193]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1795618739,v1:192.168.122.100:6801/1795618739]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: get_auth_request con 0x55bdeb2a2000 auth_method 0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: mgrc handle_mgr_configure stats_period=5
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:49.818281+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:50.818421+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:51.818585+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:52.818779+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:53.818969+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:54.819122+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:55.819315+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:56.819505+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:57.819691+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:58.819797+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:59.819951+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:00.820084+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:01.820275+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:02.820414+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:03.820561+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:04.821240+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:05.821875+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:06.821991+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:07.822119+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:08.822244+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:09.822394+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:10.822557+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:11.822766+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:12.822940+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:13.823062+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:14.823208+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:15.823377+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:16.823626+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:17.823822+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:18.823991+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:19.824263+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:20.824432+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:21.824630+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:22.824771+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:23.825175+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:24.825329+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:25.825476+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:26.825692+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:27.825869+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:28.826021+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:29.826238+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:30.826421+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:31.826602+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:32.826785+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:33.827026+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:34.827179+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:35.827312+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:36.827521+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:37.827691+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:38.827869+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:39.828048+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:40.828195+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:41.828346+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:42.828480+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:43.828706+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:44.828922+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:45.829078+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:46.829295+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:47.829499+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:48.829698+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:49.829932+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:50.830124+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:51.830440+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:52.830622+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:53.830908+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:54.831086+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:55.831246+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:56.831577+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:57.831834+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:58.832046+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:59.832289+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:00.832442+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:01.832685+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:02.832893+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:03.833102+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:04.833306+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:05.833511+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:06.833714+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:07.833876+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:08.834112+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:09.834346+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:10.834567+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:11.834761+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:12.834917+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:13.835092+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:14.835259+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:15.835418+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:16.835566+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:17.835760+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:18.836012+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:19.836414+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:20.836571+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:21.836763+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:22.836913+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:23.837086+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:24.837224+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:25.837408+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:26.837510+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:27.837622+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:28.837769+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:29.837882+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:30.838006+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:31.838184+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:32.838319+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:33.838415+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:34.838531+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:35.838646+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:36.838796+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:37.838986+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:38.839128+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:39.839297+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:40.839426+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:41.839624+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:42.839774+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:43.839986+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:44.840125+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:45.840292+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:46.840521+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:47.840663+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:48.840806+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:49.840957+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:50.841212+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:51.841438+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:52.841599+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:53.841809+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:54.842009+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:55.842216+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:56.842410+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:57.842558+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:58.842734+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:59.842878+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:00.843074+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:01.843237+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:02.843376+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:03.843704+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:04.843882+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:05.844091+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:06.844262+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:07.844470+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:08.844652+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:09.844864+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:10.845042+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:11.845298+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:12.845465+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:13.845635+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:14.845826+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:15.845971+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:16.846090+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:17.846210+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:18.846343+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:19.846603+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:20.846844+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:21.847173+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:22.847372+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:23.847525+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:24.847778+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:25.847958+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:26.848236+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:27.848454+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:28.848613+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:29.848787+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:30.849314+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:31.849701+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:32.849947+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:33.850173+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:34.850363+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:35.850653+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:36.850928+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:37.851198+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:38.851381+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:39.851561+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:40.851731+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:41.851915+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:42.852051+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:43.852265+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:44.852446+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:45.852655+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:46.852845+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:47.853005+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:48.853203+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:49.853383+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:50.853559+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:51.853750+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:52.853890+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:53.854027+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:54.854206+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:55.854421+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:56.854594+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:57.854791+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:58.854917+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:59.855117+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:00.855338+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:01.855526+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:02.855730+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:03.855902+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:04.856079+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:05.856248+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:06.856441+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:07.856602+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:08.856789+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:09.856935+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:10.857115+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:11.857381+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:12.857520+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:13.857769+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:14.857941+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:15.858120+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:16.858322+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:17.858476+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:18.858629+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:19.858792+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:20.858966+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:21.859076+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:22.859233+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:23.861213+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:24.861403+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 29 09:35:41 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3344533127' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:25.861571+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:26.861703+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:27.861835+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:28.861979+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:29.862126+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:30.862364+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:31.862638+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:32.862870+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:33.863048+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:34.863249+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:35.863436+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:36.863614+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:37.863767+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:38.863991+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:39.864202+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:40.864384+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:41.864561+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:42.864735+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:43.864923+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:44.865190+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:45.865416+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:46.865621+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:47.865808+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:48.865978+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:49.866171+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:50.866305+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:51.866501+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:52.866669+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:53.866824+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:54.866988+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:55.867256+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:56.867482+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:57.867661+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:58.867854+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:59.868027+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:00.868218+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:01.868400+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:02.868527+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:03.868816+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:04.868988+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:05.869164+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:06.869290+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:07.869385+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:08.869516+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 967.346862793s of 967.389648438s, submitted: 8
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:09.869660+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 47 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:10.869736+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 17571840 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:11.869885+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 48 handle_osd_map epochs [49,49], i have 48, src has [1,49]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 49 ms_handle_reset con 0x55bdeb712400 session 0x55bdeb7be700
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 534206 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 17539072 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:12.869999+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 17235968 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:13.870108+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 49 ms_handle_reset con 0x55bdeb712c00 session 0x55bdeac19340
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 17186816 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:14.870194+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 17186816 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:15.870332+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 50 heartbeat osd_stat(store_statfs(0x4fcccc000/0x0/0x4ffc00000, data 0x14a43a0/0x14fa000, compress 0x0/0x0/0x0, omap 0x7190, meta 0x1a28e70), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 17186816 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:16.870504+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 602690 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 17186816 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:17.870656+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 17203200 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:18.870831+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 17211392 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:19.870971+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:20.871095+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:21.871290+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 603942 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:22.871437+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:23.871591+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:24.871749+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:25.871865+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:26.871994+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 603942 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:27.872167+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:28.872297+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:29.872445+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:30.872728+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:31.872964+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 603942 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:32.873183+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:33.873338+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:34.873497+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:35.873664+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:36.873787+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 603942 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Cumulative writes: 4249 writes, 19K keys, 4249 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4249 writes, 388 syncs, 10.95 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 78 writes, 338 keys, 78 commit groups, 1.0 writes per commit group, ingest: 0.24 MB, 0.00 MB/s
                                           Interval WAL: 78 writes, 31 syncs, 2.52 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 17162240 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:37.873948+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 17162240 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:38.874095+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 17162240 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:39.874228+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 17162240 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:40.874374+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 17162240 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:41.874510+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.725841522s of 32.102912903s, submitted: 47
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 605003 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 15925248 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:42.874614+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 52 ms_handle_reset con 0x55bdeb713400 session 0x55bdeb7be000
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 52 ms_handle_reset con 0x55bdeb713c00 session 0x55bdea57da40
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 15679488 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdecc4d000
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 52 ms_handle_reset con 0x55bdecc4d000 session 0x55bdeb6cdc00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdecc4d000
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 52 ms_handle_reset con 0x55bdecc4d000 session 0x55bdeb6cd180
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:43.874735+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 15982592 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdedcb1c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:44.874851+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 53 ms_handle_reset con 0x55bdedcb1c00 session 0x55bdec498540
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdedcb1000
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 53 ms_handle_reset con 0x55bdec890800 session 0x55bdec859880
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdecc4d400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 53 ms_handle_reset con 0x55bdecc4d400 session 0x55bdec859dc0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 53 heartbeat osd_stat(store_statfs(0x4fccc4000/0x0/0x4ffc00000, data 0x14a842d/0x1506000, compress 0x0/0x0/0x0, omap 0x8263, meta 0x1a27d9d), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 23871488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:45.874979+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdecc4c000
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 23740416 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:46.875094+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 54 ms_handle_reset con 0x55bdecc4c000 session 0x55bdec87d500
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827651 data_alloc: 218103808 data_used: 287
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 23764992 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:47.875243+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 54 heartbeat osd_stat(store_statfs(0x4fa4c3000/0x0/0x4ffc00000, data 0x3ca9a1b/0x3d07000, compress 0x0/0x0/0x0, omap 0x8756, meta 0x1a278aa), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 23748608 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:48.875353+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 55 ms_handle_reset con 0x55bdedcb1000 session 0x55bdec498fc0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 55 ms_handle_reset con 0x55bdec890800 session 0x55bdeac196c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 23715840 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:49.875458+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb714400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 23691264 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:50.875636+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 56 ms_handle_reset con 0x55bdeb714400 session 0x55bdeb7be540
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 22691840 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:51.875804+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.403846741s of 10.037183762s, submitted: 116
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 57 ms_handle_reset con 0x55bdeb712400 session 0x55bdec472380
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 632544 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 57 heartbeat osd_stat(store_statfs(0x4fccb9000/0x0/0x4ffc00000, data 0x14adc8e/0x150f000, compress 0x0/0x0/0x0, omap 0x962f, meta 0x1a269d1), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 22618112 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:52.875935+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 22585344 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:53.876087+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 58 handle_osd_map epochs [58,59], i have 58, src has [1,59]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 22405120 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 59 ms_handle_reset con 0x55bdeb713c00 session 0x55bdeb5c4000
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:54.876200+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 22462464 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:55.876308+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 60 ms_handle_reset con 0x55bdeb712400 session 0x55bdeac18fc0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 22478848 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 61 ms_handle_reset con 0x55bdec890800 session 0x55bdeca17880
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:56.876500+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 645765 data_alloc: 218103808 data_used: 252
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 22478848 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fccac000/0x0/0x4ffc00000, data 0x14b338d/0x151b000, compress 0x0/0x0/0x0, omap 0xa290, meta 0x1a25d70), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:57.876649+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdedcb1000
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 21340160 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:58.876829+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 21340160 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:59.876970+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 63 ms_handle_reset con 0x55bdedcb1000 session 0x55bdeb5c5340
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 20217856 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:00.877212+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 64 ms_handle_reset con 0x55bdeb713400 session 0x55bdeb7bec40
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 20054016 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:01.877395+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fcca1000/0x0/0x4ffc00000, data 0x14b748e/0x1527000, compress 0x0/0x0/0x0, omap 0xb30b, meta 0x1a24cf5), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 65 ms_handle_reset con 0x55bdeb712c00 session 0x55bdeb7bfa40
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.239828110s of 10.334854126s, submitted: 135
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 671123 data_alloc: 218103808 data_used: 4313
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 66 ms_handle_reset con 0x55bdeb713400 session 0x55bdeac18a80
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 18907136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:02.877560+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb318c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 66 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 18800640 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319000
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 67 ms_handle_reset con 0x55bdeb318c00 session 0x55bdeaa55180
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:03.877737+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 67 ms_handle_reset con 0x55bdeb319000 session 0x55bdec82c700
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 67 ms_handle_reset con 0x55bdeb712c00 session 0x55bdeac181c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 67 ms_handle_reset con 0x55bdeb319400 session 0x55bdeb5c4c40
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319000
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 18612224 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:04.877898+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 67 handle_osd_map epochs [67,68], i have 68, src has [1,68]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 68 ms_handle_reset con 0x55bdeb319000 session 0x55bdeb5c48c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 18554880 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:05.878072+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 18554880 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:06.878219+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fcc8f000/0x0/0x4ffc00000, data 0x14be5dd/0x153b000, compress 0x0/0x0/0x0, omap 0xc633, meta 0x1a239cd), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 688350 data_alloc: 218103808 data_used: 4313
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 18554880 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:07.878361+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb318c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 18636800 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:08.878575+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 70 ms_handle_reset con 0x55bdeb318c00 session 0x55bdeb7bfa40
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 18489344 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:09.878697+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 18448384 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 71 ms_handle_reset con 0x55bdeb319400 session 0x55bdeb5c41c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:10.878868+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 18374656 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 71 ms_handle_reset con 0x55bdeb713400 session 0x55bdeabc6a80
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:11.879079+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.132811546s of 10.004205704s, submitted: 173
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 72 ms_handle_reset con 0x55bdeb712400 session 0x55bdeb5c5880
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 696007 data_alloc: 218103808 data_used: 4313
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fcc8a000/0x0/0x4ffc00000, data 0x14c1831/0x1542000, compress 0x0/0x0/0x0, omap 0xd688, meta 0x1a22978), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 18284544 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:12.879247+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fcc86000/0x0/0x4ffc00000, data 0x14c2e1f/0x1543000, compress 0x0/0x0/0x0, omap 0xda0f, meta 0x1a225f1), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 18284544 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:13.879412+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 73 ms_handle_reset con 0x55bdeb712400 session 0x55bdeaa49340
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb318c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 73 ms_handle_reset con 0x55bdeb318c00 session 0x55bdeb5c5180
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319000
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 18219008 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:14.879553+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 74 ms_handle_reset con 0x55bdeb319000 session 0x55bdec8581c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 74 ms_handle_reset con 0x55bdeb319400 session 0x55bdeaafaa80
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 17842176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:15.879698+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 75 ms_handle_reset con 0x55bdeb712c00 session 0x55bdeaa54c40
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 17743872 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:16.879912+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 76 ms_handle_reset con 0x55bdeb713400 session 0x55bdeaafb180
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 706321 data_alloc: 218103808 data_used: 12435
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 17760256 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb318c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:17.880204+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319000
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 77 heartbeat osd_stat(store_statfs(0x4fcc7e000/0x0/0x4ffc00000, data 0x14c806a/0x154c000, compress 0x0/0x0/0x0, omap 0xed74, meta 0x1a2128c), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 17760256 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:18.880476+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 17760256 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:19.881238+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 17743872 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:20.881430+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 78 ms_handle_reset con 0x55bdeb319400 session 0x55bdec473880
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 17686528 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:21.881630+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 79 ms_handle_reset con 0x55bdeb712400 session 0x55bdec843880
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 724116 data_alloc: 218103808 data_used: 12517
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.947123528s of 10.242585182s, submitted: 149
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 17645568 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:22.881873+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fbacb000/0x0/0x4ffc00000, data 0x14cd6b4/0x155b000, compress 0x0/0x0/0x0, omap 0xfb85, meta 0x2bc047b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 80 ms_handle_reset con 0x55bdec890800 session 0x55bdec87c700
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 17522688 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:23.882063+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fbc00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 80 ms_handle_reset con 0x55bded5fbc00 session 0x55bdeac181c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 81 ms_handle_reset con 0x55bdeb319400 session 0x55bdec498540
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 81 heartbeat osd_stat(store_statfs(0x4fbacb000/0x0/0x4ffc00000, data 0x14cd6b4/0x155b000, compress 0x0/0x0/0x0, omap 0xfc0f, meta 0x2bc03f1), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 17367040 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:24.882350+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 17227776 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 82 ms_handle_reset con 0x55bdeb712400 session 0x55bdec472540
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:25.882549+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdeb713400 session 0x55bdec87dc00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:26.882764+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737140 data_alloc: 218103808 data_used: 12517
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:27.882973+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:28.883111+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:29.883242+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 83 heartbeat osd_stat(store_statfs(0x4fbac6000/0x0/0x4ffc00000, data 0x14d1ced/0x1564000, compress 0x0/0x0/0x0, omap 0x10855, meta 0x2bbf7ab), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread fragmentation_score=0.000134 took=0.001228s
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdec890800 session 0x55bdeb6cd880
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb800
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:30.883362+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 16850944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bded5fb400 session 0x55bdeb6cc1c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bded5fb800 session 0x55bdeca16a80
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bded5fb400 session 0x55bdeca161c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdeb319400 session 0x55bdec82dc00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdeb712400 session 0x55bdeaa54540
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdeb713400 session 0x55bdeaafa700
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdeb319400 session 0x55bdeaa55180
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:31.883497+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 17137664 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 83 heartbeat osd_stat(store_statfs(0x4fbac8000/0x0/0x4ffc00000, data 0x14d1ced/0x1564000, compress 0x0/0x0/0x0, omap 0x10aec, meta 0x2bbf514), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdeb712400 session 0x55bdeb6cc380
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737721 data_alloc: 218103808 data_used: 12571
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.843376160s of 10.071531296s, submitted: 98
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 84 ms_handle_reset con 0x55bded5fb400 session 0x55bdec4981c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb800
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 84 ms_handle_reset con 0x55bded5fb800 session 0x55bdec499a40
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:32.883607+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 17154048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:33.884721+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 17154048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb000
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fba9e000/0x0/0x4ffc00000, data 0x14f71e5/0x158c000, compress 0x0/0x0/0x0, omap 0x10e75, meta 0x2bbf18b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:34.884984+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 16932864 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:35.885217+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 16932864 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fba9e000/0x0/0x4ffc00000, data 0x14f71e5/0x158c000, compress 0x0/0x0/0x0, omap 0x10e75, meta 0x2bbf18b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:36.885442+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 16932864 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb714c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 84 ms_handle_reset con 0x55bdeb714c00 session 0x55bdeb5c48c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 745515 data_alloc: 218103808 data_used: 15131
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:37.885646+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 16785408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fbaa0000/0x0/0x4ffc00000, data 0x14f71e5/0x158c000, compress 0x0/0x0/0x0, omap 0x10e75, meta 0x2bbf18b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 84 handle_osd_map epochs [84,85], i have 85, src has [1,85]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 85 ms_handle_reset con 0x55bdeb319400 session 0x55bdeaafa380
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 85 ms_handle_reset con 0x55bded5fb400 session 0x55bdec4988c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 85 ms_handle_reset con 0x55bdeb712400 session 0x55bdec843500
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb800
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 85 ms_handle_reset con 0x55bded5fb800 session 0x55bdec87ca80
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdecc20c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:38.885781+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 15728640 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 86 ms_handle_reset con 0x55bdecc20c00 session 0x55bdec87cfc0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 86 heartbeat osd_stat(store_statfs(0x4fba94000/0x0/0x4ffc00000, data 0x14fa1d3/0x1593000, compress 0x0/0x0/0x0, omap 0x11ae1, meta 0x2bbe51f), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:39.885932+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 15736832 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:40.886097+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 87 heartbeat osd_stat(store_statfs(0x4fba8f000/0x0/0x4ffc00000, data 0x14fb7bc/0x1596000, compress 0x0/0x0/0x0, omap 0x11d7a, meta 0x2bbe286), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:41.886335+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 15638528 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 758109 data_alloc: 218103808 data_used: 15643
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.955635071s of 10.048162460s, submitted: 84
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 87 ms_handle_reset con 0x55bdeb319400 session 0x55bdeca176c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:42.886511+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 15630336 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:43.886655+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 15630336 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 88 ms_handle_reset con 0x55bdeb712400 session 0x55bdeb7bfa40
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:44.886812+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 15622144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 88 ms_handle_reset con 0x55bdec890800 session 0x55bdec87d6c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 88 ms_handle_reset con 0x55bded5fb000 session 0x55bdeac19880
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fac00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:45.887027+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 15572992 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 89 ms_handle_reset con 0x55bded5fac00 session 0x55bdeca17dc0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 89 heartbeat osd_stat(store_statfs(0x4fbab5000/0x0/0x4ffc00000, data 0x14da003/0x1575000, compress 0x0/0x0/0x0, omap 0x12554, meta 0x2bbdaac), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:46.888009+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 15572992 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 759964 data_alloc: 218103808 data_used: 14173
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:47.888398+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 15564800 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 90 ms_handle_reset con 0x55bdeb318c00 session 0x55bdec87ddc0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 90 ms_handle_reset con 0x55bdeb319000 session 0x55bdeb5c5180
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:48.888523+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 15564800 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdecc21c00
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 90 ms_handle_reset con 0x55bdecc21c00 session 0x55bdec86b6c0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:49.888744+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 15564800 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:50.889198+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 15564800 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 90 ms_handle_reset con 0x55bdeb712400 session 0x55bdeaafbdc0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:51.889695+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 15564800 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 91 ms_handle_reset con 0x55bdec890800 session 0x55bdeb7be700
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 91 heartbeat osd_stat(store_statfs(0x4fbab6000/0x0/0x4ffc00000, data 0x14db4e7/0x1576000, compress 0x0/0x0/0x0, omap 0x1293b, meta 0x2bbd6c5), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 763859 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:52.889832+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:53.890192+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:54.890489+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 92 heartbeat osd_stat(store_statfs(0x4fbaad000/0x0/0x4ffc00000, data 0x14ddfa2/0x157b000, compress 0x0/0x0/0x0, omap 0x12e65, meta 0x2bbd19b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:55.890708+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 92 heartbeat osd_stat(store_statfs(0x4fbaad000/0x0/0x4ffc00000, data 0x14ddfa2/0x157b000, compress 0x0/0x0/0x0, omap 0x12e65, meta 0x2bbd19b), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:56.891045+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.702639580s of 14.894862175s, submitted: 114
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:57.891261+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:58.891544+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:59.891780+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:00.891983+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:01.892233+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:02.892395+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:03.892552+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:04.892718+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:05.892856+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:06.893090+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:07.893383+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:08.893573+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:09.893812+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:10.894194+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:11.894434+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:12.894621+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:13.894799+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:14.894929+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:15.895094+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:16.895337+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:17.895498+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:18.895664+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:19.895848+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:20.896001+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:21.896111+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:22.896274+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:23.896375+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:24.896502+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:25.896621+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:26.896799+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:27.896939+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:28.897068+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:29.897244+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:30.897406+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:31.897556+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:32.897674+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:33.897805+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:34.897939+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:35.898069+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:36.898207+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:37.898356+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:38.898475+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:39.898674+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:40.898799+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:41.898990+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:42.899198+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:43.899333+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:44.899493+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:45.899659+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:46.899802+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:47.900025+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:48.900180+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:49.900294+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:50.900446+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:51.900626+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:52.900816+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:53.900983+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:54.901157+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:55.901377+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:56.901529+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:57.901710+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:58.901881+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:59.902011+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:00.902158+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:01.902285+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:02.902416+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:03.902612+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:04.902742+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:05.902884+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:06.903011+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:07.903123+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:41 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:41 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 15540224 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: do_command 'config diff' '{prefix=config diff}'
Jan 29 09:35:41 compute-0 ceph-osd[88193]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 29 09:35:41 compute-0 ceph-osd[88193]: do_command 'config show' '{prefix=config show}'
Jan 29 09:35:41 compute-0 ceph-osd[88193]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 29 09:35:41 compute-0 ceph-osd[88193]: do_command 'counter dump' '{prefix=counter dump}'
Jan 29 09:35:41 compute-0 ceph-osd[88193]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:08.903271+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: do_command 'counter schema' '{prefix=counter schema}'
Jan 29 09:35:41 compute-0 ceph-osd[88193]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 15155200 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:35:41 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:09.903388+0000)
Jan 29 09:35:41 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 15081472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:41 compute-0 ceph-osd[88193]: do_command 'log dump' '{prefix=log dump}'
Jan 29 09:35:41 compute-0 rsyslogd[998]: imjournal from <np0005600302:ceph-osd>: begin to drop messages due to rate-limiting
Jan 29 09:35:41 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14744 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:41 compute-0 ceph-mon[75183]: from='client.14734 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:41 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2333061109' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 29 09:35:41 compute-0 ceph-mon[75183]: pgmap v796: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:41 compute-0 ceph-mon[75183]: from='client.14740 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:41 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3344533127' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 29 09:35:41 compute-0 ceph-mon[75183]: from='client.14744 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:41 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 29 09:35:41 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3775821635' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 29 09:35:41 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14748 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 29 09:35:42 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2252401011' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 29 09:35:42 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14752 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:42 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3775821635' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 29 09:35:42 compute-0 ceph-mon[75183]: from='client.14748 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:42 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2252401011' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 29 09:35:42 compute-0 ceph-mon[75183]: from='client.14752 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:35:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 29 09:35:42 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1748698226' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 29 09:35:42 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14756 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v797: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 29 09:35:43 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/986204682' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 29 09:35:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14760 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:43 compute-0 nova_compute[236255]: 2026-01-29 09:35:43.554 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:35:43 compute-0 nova_compute[236255]: 2026-01-29 09:35:43.555 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:35:43 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1748698226' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 29 09:35:43 compute-0 ceph-mon[75183]: from='client.14756 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:43 compute-0 ceph-mon[75183]: pgmap v797: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:43 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/986204682' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 29 09:35:43 compute-0 ceph-mon[75183]: from='client.14760 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14762 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 29 09:35:43 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/588833570' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 29 09:35:43 compute-0 crontab[243901]: (root) LIST (root)
Jan 29 09:35:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14766 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 29 09:35:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232841502' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 29 09:35:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14770 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:44 compute-0 ceph-mon[75183]: from='client.14762 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:44 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/588833570' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 29 09:35:44 compute-0 ceph-mon[75183]: from='client.14766 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:44 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/232841502' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 29 09:35:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v798: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14774 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.781093 4 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000072 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.781653 4 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.781332 4 0.000028
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000053 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000059 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000051 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.781559 4 0.000029
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.781494 4 0.000025
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.783899 4 0.000025
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000034 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000045 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.781925 4 0.000020
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000031 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000039 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000085 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.784119 4 0.000024
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000061 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=43 pruub=9.044003487s) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering pruub 73.670753479s@ mbc={}] exit Started/Primary/Peering/WaitUpThru 0.785316 3 0.000145
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=43 pruub=9.044003487s) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering pruub 73.670753479s@ mbc={}] exit Started/Primary/Peering 0.785427 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=43 pruub=9.044003487s) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown pruub 73.670753479s@ mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.782493 4 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.784380 4 0.000031
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000065 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.784332 4 0.000017
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000053 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.783033 4 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000066 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000041 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.782914 4 0.000030
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.782905 4 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000102 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.784384 4 0.000023
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000036 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.783434 4 0.000049
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000016 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.783462 4 0.000024
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000019 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000338 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000106 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000064 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000102 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.782217 4 0.000030
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000051 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000178 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.784944 4 0.000035
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000075 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000480 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.784974 4 0.000019
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000525 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.784983 4 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000050 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.783864 4 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000076 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000122 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000172 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=29/30 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004506 3 0.000163
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004512 3 0.000174
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004292 3 0.000127
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004337 3 0.000141
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004275 3 0.000113
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004447 3 0.000222
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006326 3 0.000130
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006275 3 0.000151
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006444 3 0.000178
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006347 3 0.000215
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006457 3 0.000144
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006563 3 0.000122
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006499 3 0.000092
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006534 3 0.000129
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006586 3 0.000111
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006635 3 0.000255
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=29/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006614 3 0.000064
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=29/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=29/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=29/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006670 3 0.000161
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006682 3 0.000194
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006707 3 0.000117
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006751 3 0.000137
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006867 3 0.000122
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006848 3 0.000181
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006881 3 0.000118
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006755 3 0.000180
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006688 3 0.000132
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006858 3 0.000655
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006359 3 0.000679
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006206 3 0.000711
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007111 3 0.001019
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006199 3 0.000676
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=29/29 les/c/f=30/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006120 3 0.000490
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/29 les/c/f=44/30/0 sis=43) [1] r=0 lpr=43 pi=[29,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:34.174125+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 60399616 unmapped: 1458176 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 44 heartbeat osd_stat(store_statfs(0x4fe0ec000/0x0/0x4ffc00000, data 0x99e76/0xdc000, compress 0x0/0x0/0x0, omap 0x3420, meta 0x1a2cbe0), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:35.174283+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 60481536 unmapped: 1376256 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:36.174430+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 60481536 unmapped: 1376256 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 44 handle_osd_map epochs [45,45], i have 44, src has [1,45]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000198 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000027 1 0.000044
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000267 1 0.000053
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000134 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000038
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000174 1 0.000046
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000066 1 0.000031
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 1 0.000037
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000056 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000014
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000046 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 1 0.000025
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000012
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000047
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000014
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000061 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000014
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000028
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000030
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000013
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 1 0.000028
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000013
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000013
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000022
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000013
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000020
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000023 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000051
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000037 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000013
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000028
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000037 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000017
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000012
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000058 1 0.000035
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000018
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000030
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000660 1 0.000025
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000131 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000156 1 0.000042
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000046 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000012
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000145 1 0.000036
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.995837 1 0.000076
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.000355 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.000426 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.000455 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003906250s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.418182373s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003882408s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418182373s@ mbc={}] exit Reset 0.000046 1 0.000086
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003882408s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418182373s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003882408s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418182373s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003882408s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418182373s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003882408s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418182373s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003882408s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418182373s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.157276 13 0.000086
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.163903 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.163950 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.163969 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841853142s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256263733s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841838837s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256263733s@ mbc={}] exit Reset 0.000026 1 0.000046
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841838837s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256263733s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841838837s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256263733s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841838837s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256263733s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841838837s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256263733s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841838837s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256263733s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.996196 1 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.000516 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.000583 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.000625 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003689766s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.418220520s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.157406 13 0.000087
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003646851s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418220520s@ mbc={}] exit Reset 0.000058 1 0.000072
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003646851s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418220520s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.164003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003646851s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418220520s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003646851s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418220520s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.164049 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003646851s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418220520s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003646851s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418220520s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.164088 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.157462 13 0.000101
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.164119 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.164179 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.996279 1 0.000023
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.000581 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.000638 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.164222 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.140034 13 0.000120
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.162526 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.162594 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841897964s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256576538s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.000713 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841745377s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256446838s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003516197s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.418243408s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.162620 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859436035s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.274192810s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841834068s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256576538s@ mbc={}] exit Reset 0.000136 1 0.000188
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841834068s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256576538s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841701508s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256446838s@ mbc={}] exit Reset 0.000096 1 0.000139
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859423637s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.274192810s@ mbc={}] exit Reset 0.000034 1 0.000099
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841834068s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256576538s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841701508s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256446838s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859423637s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.274192810s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841701508s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256446838s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859423637s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.274192810s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859423637s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.274192810s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859423637s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.274192810s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.859423637s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.274192810s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.157983 13 0.000076
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.164690 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.164738 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.164760 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841225624s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256187439s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841215134s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256187439s@ mbc={}] exit Reset 0.000025 1 0.000046
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841215134s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256187439s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841215134s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256187439s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841215134s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256187439s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841215134s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256187439s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841215134s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256187439s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841701508s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256446838s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841701508s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256446838s@ mbc={}] exit Start 0.000231 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841701508s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256446838s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.158198 13 0.000058
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.164848 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.164907 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.164928 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841834068s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256576538s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841054916s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256141663s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841834068s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256576538s@ mbc={}] exit Start 0.000304 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841834068s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256576538s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.994264 1 0.000076
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841039658s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256141663s@ mbc={}] exit Reset 0.000029 1 0.000046
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.000673 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841039658s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256141663s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841039658s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256141663s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.000740 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841039658s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256141663s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841039658s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256141663s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.841039658s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256141663s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.000821 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005534172s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.420669556s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005519867s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.420669556s@ mbc={}] exit Reset 0.000035 1 0.000061
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005519867s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.420669556s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005519867s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.420669556s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005519867s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.420669556s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005519867s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.420669556s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005519867s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.420669556s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.158196 13 0.000248
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.164871 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003029823s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418243408s@ mbc={}] exit Reset 0.000507 1 0.000573
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.165205 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.158414 13 0.000065
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003029823s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418243408s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003029823s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418243408s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.165167 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003029823s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418243408s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.165247 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003029823s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418243408s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.165236 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.003029823s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.418243408s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.165274 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840822220s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256080627s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840955734s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256217957s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.994115 1 0.000033
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840809822s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256080627s@ mbc={}] exit Reset 0.000027 1 0.000052
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.000691 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840809822s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256080627s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840927124s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256217957s@ mbc={}] exit Reset 0.000056 1 0.000092
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.000750 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840809822s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256080627s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840809822s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256080627s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840809822s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256080627s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840809822s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256080627s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840927124s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256217957s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840927124s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256217957s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840927124s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256217957s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840927124s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256217957s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840927124s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256217957s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.993957 1 0.000052
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.000635 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.000738 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.158618 13 0.000071
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.165556 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.165622 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.000857 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.165648 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005935669s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421333313s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840617180s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.256027222s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.994279 1 0.000048
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005921364s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421333313s@ mbc={}] exit Reset 0.000030 1 0.000055
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005921364s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421333313s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.000819 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005921364s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421333313s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005921364s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421333313s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840600014s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256027222s@ mbc={}] exit Reset 0.000037 1 0.000058
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840600014s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256027222s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005921364s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421333313s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840600014s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256027222s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005921364s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421333313s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840600014s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256027222s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840600014s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256027222s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.840600014s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.256027222s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.994200 1 0.000044
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.000823 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.000868 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.000887 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005681992s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421249390s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.000866 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005669594s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421249390s@ mbc={}] exit Reset 0.000027 1 0.000048
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005669594s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421249390s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005669594s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421249390s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005669594s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421249390s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005669594s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421249390s@ mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005669594s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421249390s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.001063 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005396843s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421043396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.994146 1 0.000049
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.000859 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.000932 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005373001s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421043396s@ mbc={}] exit Reset 0.000056 1 0.000261
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.000958 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005373001s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421043396s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005373001s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421043396s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005373001s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421043396s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005373001s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421043396s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005716324s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421417236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005373001s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421043396s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005702019s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421417236s@ mbc={}] exit Reset 0.000030 1 0.000054
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005702019s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421417236s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005702019s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421417236s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005702019s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421417236s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005702019s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421417236s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005702019s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421417236s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.994127 1 0.000049
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.000848 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.000924 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.000951 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005743980s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421562195s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005726814s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421562195s@ mbc={}] exit Reset 0.000048 1 0.000053
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005726814s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421562195s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005726814s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421562195s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005726814s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421562195s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005726814s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421562195s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005726814s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421562195s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.159433 13 0.000068
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.166449 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.994206 1 0.000054
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.166512 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.000962 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.001032 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.166548 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.001066 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839838028s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255844116s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.159461 13 0.000803
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.166565 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.166647 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839819908s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] exit Reset 0.000034 1 0.000072
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839819908s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.166677 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839819908s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005626678s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421653748s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839819908s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839819908s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839819908s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005608559s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421653748s@ mbc={}] exit Reset 0.000047 1 0.000088
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005608559s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421653748s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005608559s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421653748s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005608559s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421653748s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839775085s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255844116s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005608559s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421653748s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005608559s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421653748s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839753151s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] exit Reset 0.000051 1 0.000083
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839753151s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839753151s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839753151s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839753151s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839753151s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255844116s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.159510 13 0.000076
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.994158 1 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.166537 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.001058 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.001116 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.166868 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.001139 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.159442 13 0.000076
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.166903 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.166986 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.167052 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005732536s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421928406s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839701653s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255912781s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005712509s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421928406s@ mbc={}] exit Reset 0.000044 1 0.000072
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005712509s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421928406s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839684486s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255912781s@ mbc={}] exit Reset 0.000043 1 0.000088
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005712509s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421928406s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839684486s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255912781s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.167111 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005712509s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421928406s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839684486s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255912781s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005712509s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421928406s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839684486s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255912781s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839684486s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255912781s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005712509s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421928406s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839684486s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255912781s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839718819s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255989075s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839698792s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255989075s@ mbc={}] exit Reset 0.000047 1 0.000115
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839698792s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255989075s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839698792s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255989075s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839698792s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255989075s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839698792s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255989075s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839698792s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255989075s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.993764 1 0.000044
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.159843 13 0.000090
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.000676 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.167150 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.000874 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.167221 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.167255 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.001840 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839388847s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255767822s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.994236 1 0.000051
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.001135 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839371681s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255767822s@ mbc={}] exit Reset 0.000038 1 0.000074
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839371681s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255767822s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004714966s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.421127319s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.001251 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839371681s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255767822s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839371681s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255767822s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839371681s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255767822s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.001285 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839371681s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255767822s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421127319s@ mbc={}] exit Reset 0.000050 1 0.001143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421127319s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421127319s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421127319s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421127319s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.421127319s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005585670s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422073364s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.160154 13 0.000206
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005562782s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422073364s@ mbc={}] exit Reset 0.000078 1 0.000113
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.167471 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005562782s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422073364s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.167534 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005562782s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422073364s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005562782s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422073364s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005562782s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422073364s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.167567 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005562782s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422073364s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.994299 1 0.000050
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.001225 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839193344s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255752563s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.001522 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839136124s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255752563s@ mbc={}] exit Reset 0.000090 1 0.000072
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005764961s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422416687s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839136124s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255752563s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839136124s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255752563s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839136124s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255752563s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839136124s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255752563s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005743980s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422416687s@ mbc={}] exit Reset 0.000056 1 0.000355
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.839136124s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255752563s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005743980s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422416687s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005743980s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422416687s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005743980s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422416687s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005743980s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422416687s@ mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005743980s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422416687s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.001643 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.001670 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005180359s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422157288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.160455 13 0.000087
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.167627 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005159378s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422157288s@ mbc={}] exit Reset 0.000041 1 0.000442
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.167702 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005159378s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422157288s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005159378s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422157288s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005159378s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422157288s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005159378s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422157288s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.167740 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005159378s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422157288s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838771820s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255821228s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838751793s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] exit Reset 0.000044 1 0.000086
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838751793s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838751793s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838751793s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838751793s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838751793s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.160438 13 0.000060
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.168319 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.168379 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.168403 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.994819 1 0.000050
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.001617 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.001747 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.001766 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838637352s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255821228s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005061150s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422256470s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838616371s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] exit Reset 0.000058 1 0.000085
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838616371s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005044937s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422256470s@ mbc={}] exit Reset 0.000033 1 0.000056
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838616371s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005044937s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422256470s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838616371s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005044937s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422256470s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005044937s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422256470s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838616371s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005044937s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422256470s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.838616371s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255821228s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.005044937s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422256470s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.994944 1 0.000047
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.001711 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.001797 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.001821 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004891396s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422325134s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004875183s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422325134s@ mbc={}] exit Reset 0.000045 1 0.000096
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004875183s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422325134s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004875183s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422325134s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004875183s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422325134s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004875183s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422325134s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004875183s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422325134s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.161824 13 0.000105
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.168887 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.168951 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.168984 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837707520s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255317688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837686539s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] exit Reset 0.000052 1 0.000084
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837686539s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837686539s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837686539s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837686539s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837686539s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.995120 1 0.000042
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.001518 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.161434 13 0.000088
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.002059 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.168805 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.169226 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.002089 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.169264 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004628181s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422424316s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837920189s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255729675s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837900162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255729675s@ mbc={}] exit Reset 0.000041 1 0.000074
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004602432s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422424316s@ mbc={}] exit Reset 0.000051 1 0.000084
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837900162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255729675s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004602432s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422424316s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837900162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255729675s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004602432s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422424316s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837900162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255729675s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004602432s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422424316s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837900162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255729675s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004602432s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422424316s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837900162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255729675s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004602432s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422424316s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.995217 1 0.000040
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.001457 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.001550 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.001579 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.162347 13 0.000181
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.169299 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 2.995157 1 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.169630 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.001308 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.169666 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.001507 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004592896s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422576904s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.001543 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837324142s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255317688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004566193s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422576904s@ mbc={}] exit Reset 0.000067 1 0.000095
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004566193s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422576904s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837303162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] exit Reset 0.000042 1 0.000074
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004566193s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422576904s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004566193s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422576904s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837303162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004714966s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 81.422737122s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004566193s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422576904s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837303162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837303162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837303162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837303162s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255317688s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422737122s@ mbc={}] exit Reset 0.000048 1 0.000092
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422737122s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422737122s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422737122s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422737122s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004692078s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422737122s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.162512 13 0.000181
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.169708 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.169859 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.169888 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837179184s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 active pruub 77.255332947s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837144852s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255332947s@ mbc={}] exit Reset 0.000054 1 0.000089
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837144852s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255332947s@ mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837144852s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255332947s@ mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837144852s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255332947s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837144852s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255332947s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=8.837144852s) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY pruub 77.255332947s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=13.004566193s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 81.422576904s@ mbc={}] enter Started/Stray
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009874 2 0.000056
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009535 2 0.000077
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009435 2 0.000023
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009320 2 0.000055
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009278 2 0.000024
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009315 2 0.000021
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009237 2 0.000021
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008903 2 0.000025
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008944 2 0.000025
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009305 2 0.000028
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008894 2 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013108 2 0.000040
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013087 2 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012948 2 0.000031
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012982 2 0.000023
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013117 2 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014075 2 0.000019
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012824 2 0.000042
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014266 2 0.000059
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014568 2 0.000663
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014445 2 0.000050
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017336 2 0.000024
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014286 2 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000112 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000117 1 0.000048
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000121 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000029
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000105 1 0.000044
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000073 1 0.000034
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000118 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000094 1 0.000024
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000350 1 0.000142
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 45 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000087 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000104 1 0.000036
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000077 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000018
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000090 1 0.000031
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000074 1 0.000030
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000099 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000021
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000079 1 0.000047
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000070 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000020
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000050
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000090 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000022
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000094 1 0.000050
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000094 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000022
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000088 1 0.000044
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000096 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000019
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000047 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000087 1 0.000084
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000014
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000113 1 0.000037
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000013
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000094 1 0.000037
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000064 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000017
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000039
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000060 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000085 1 0.000046
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000092 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000020
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000070 1 0.000045
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000071 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000021
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000084 1 0.000039
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000073 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000013
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000069 1 0.000037
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000009
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000140 1 0.000047
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000031
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000102 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000170 1 0.000048
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d(unlocked)] enter Initial
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000112 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000024
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000104 1 0.000045
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016203 2 0.000053
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015762 2 0.000038
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015423 2 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013800 2 0.000048
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013198 2 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012845 2 0.000031
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012676 2 0.000023
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012007 2 0.000033
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011600 2 0.000040
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012441 2 0.000033
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011175 2 0.000035
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010803 2 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010335 2 0.000036
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009980 2 0.000034
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009017 2 0.000036
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008578 2 0.000040
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009780 2 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007827 2 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fddc9400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006748 2 0.000047
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005478 2 0.000041
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007553 2 0.000021
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007829 2 0.000038
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009823 2 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 45 heartbeat osd_stat(store_statfs(0x4fe0e7000/0x0/0x4ffc00000, data 0x9b2fe/0xdf000, compress 0x0/0x0/0x0, omap 0x3420, meta 0x1a2cbe0), peers [0,2] op hist [0,0,0,0,0,0,0,0,10])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:37.174611+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 45 handle_osd_map epochs [45,46], i have 45, src has [1,46]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.971852303s of 10.239696503s, submitted: 378
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 45 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974220 2 0.000038
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987206 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973523 2 0.000156
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.981189 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974426 2 0.000031
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987761 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974375 2 0.000034
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974507 2 0.000076
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988726 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974574 2 0.000093
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990118 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973745 2 0.000484
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983700 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974309 2 0.000053
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985248 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974357 2 0.000103
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984845 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974227 2 0.000031
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982928 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974406 2 0.000054
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985718 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974233 2 0.000029
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982188 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974334 2 0.000074
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983470 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974601 2 0.000171
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987167 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974546 2 0.000039
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984662 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974364 2 0.000048
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984262 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974119 2 0.000241
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982133 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974903 2 0.000052
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987703 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974886 2 0.000037
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975195 2 0.000055
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986630 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991578 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986494 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975208 2 0.000063
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991129 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974499 2 0.000029
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980126 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974577 2 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.981535 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995632 2 0.000072
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996732 2 0.000055
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011079 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010105 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998513 2 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996594 2 0.000037
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011858 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011470 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996369 2 0.000062
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011023 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998941 2 0.000029
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012124 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999429 2 0.000033
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012482 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999729 2 0.000021
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012762 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999940 2 0.000023
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013107 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000109 2 0.000052
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013295 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999431 2 0.000035
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013585 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005239 2 0.000043
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.014264 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005214 2 0.000028
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.014176 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005581 2 0.000037
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.014555 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005419 2 0.000028
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.014822 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997596 2 0.000067
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.015017 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005985 2 0.000025
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.015309 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006121 2 0.000050
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.015523 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006351 2 0.000050
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.015715 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006822 2 0.000031
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.016356 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006793 2 0.000028
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.016225 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007341 2 0.000041
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.017526 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007404 2 0.000055
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.017157 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005748 4 0.000130
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005628 4 0.000074
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005732 4 0.000194
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005622 4 0.000134
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005584 4 0.000065
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006364 4 0.000066
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006417 4 0.000057
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006435 4 0.000057
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006420 4 0.000088
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006447 4 0.000060
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006458 4 0.000059
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005672 4 0.000070
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007066 4 0.001753
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007098 4 0.001704
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007404 4 0.001998
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015853 4 0.000144
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000049 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016017 4 0.000059
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016354 4 0.000078
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016619 4 0.000716
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016656 4 0.000130
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016626 4 0.000076
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016342 4 0.000404
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016223 4 0.000114
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016282 4 0.000131
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016188 4 0.000062
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016255 4 0.000074
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015942 4 0.000066
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015829 4 0.000062
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015727 4 0.000484
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015888 4 0.000121
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015741 4 0.000097
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015422 4 0.000062
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015352 4 0.000073
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016780 4 0.000068
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017220 4 0.000880
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015348 4 0.000069
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=45/41 les/c/f=46/43/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017174 4 0.000437
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=45/46 n=0 ec=43/27 lis/c=45/43 les/c/f=46/44/0 sis=45) [1] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015291 4 0.000061
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015340 4 0.000088
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015422 4 0.000061
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015406 4 0.000064
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015200 4 0.000054
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015212 4 0.000067
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015147 4 0.000056
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015552 4 0.000089
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014920 4 0.000313
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025884 7 0.000152
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022701 7 0.000070
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000154 1 0.000030
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000445 1 0.000077
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030063 7 0.000495
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029972 7 0.000100
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030438 7 0.000071
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030104 7 0.000056
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000102 1 0.000044
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030165 7 0.000155
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030982 7 0.000068
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000075 1 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030037 7 0.000072
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029659 7 0.000073
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028693 7 0.000060
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028941 7 0.000078
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029343 7 0.000109
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029179 7 0.000073
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000197 1 0.000067
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028450 7 0.000053
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029057 7 0.000069
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027454 7 0.000274
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027673 7 0.000068
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000255 1 0.000021
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028298 7 0.000066
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027517 7 0.000076
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029332 7 0.000099
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028932 7 0.000095
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000319 1 0.000062
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000308 1 0.000023
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000355 1 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000389 1 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000428 1 0.000028
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000480 1 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000513 1 0.000038
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000696 1 0.000166
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000548 1 0.000029
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000636 1 0.000053
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000658 1 0.000031
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000679 1 0.000030
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000694 1 0.000020
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000757 1 0.000089
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000735 1 0.000019
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000801 1 0.000114
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035916 7 0.000298
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032919 7 0.000045
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033912 7 0.000058
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036456 7 0.000060
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034057 7 0.000077
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035728 7 0.000062
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035252 7 0.000062
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035034 7 0.000094
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035652 7 0.000050
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033544 7 0.000072
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035969 7 0.000058
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035386 7 0.000076
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034924 7 0.000062
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036036 7 0.000050
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034032 7 0.000055
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033883 7 0.000056
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000621 1 0.000035
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.037145 7 0.000050
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036257 7 0.000084
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035735 7 0.000065
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000961 1 0.000022
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001028 1 0.000037
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001035 1 0.000026
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000839 1 0.000030
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000859 1 0.000016
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000856 1 0.000018
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000885 1 0.000055
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000891 1 0.000019
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000918 1 0.000019
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000927 1 0.000081
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000967 1 0.000024
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000954 1 0.000022
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000958 1 0.000018
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000960 1 0.000016
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001141 1 0.000175
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001005 1 0.000021
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001019 1 0.000022
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001000 1 0.000372
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.018003 1 0.000045
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.018193 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.040926 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.12( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030645 1 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.12( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.031157 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.12( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.057071 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030060 1 0.000057
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.030234 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.060645 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.034387 1 0.000058
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.034541 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.065020 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.043028 1 0.000036
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.043410 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.073542 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.048888 1 0.000065
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.049127 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.079156 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.056129 1 0.000033
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.056495 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.086694 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.e( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.063694 1 0.000046
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.e( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.064056 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.e( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.094131 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.070793 1 0.000054
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.071196 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.100898 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.078384 1 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.078854 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.107581 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.085534 1 0.000054
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.086012 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.114996 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.092915 1 0.000039
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.093586 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.122967 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.100362 1 0.000029
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.100926 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.130152 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.107722 1 0.000040
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.108463 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.139473 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.114925 1 0.000047
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.115535 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.144637 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.122159 1 0.000045
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.122854 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.151336 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1a( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.129478 1 0.000036
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1a( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.130183 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1a( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.157683 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.136961 1 0.000042
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.137676 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.165395 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1e( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.144274 1 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1e( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.145011 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1e( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.172564 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.151612 1 0.000034
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.152405 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.180745 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.158937 1 0.000022
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.159704 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.188671 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.166532 1 0.000024
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.167375 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.196743 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.168671 1 0.000182
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.169337 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.205518 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.175708 1 0.000087
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.176753 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.209702 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.183377 1 0.000038
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.184460 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.218408 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.190682 1 0.000038
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.191762 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.228254 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.197741 1 0.000034
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.198631 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.232733 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.205111 1 0.000033
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.206005 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.241756 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.212421 1 0.000031
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.213316 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.248600 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.219726 1 0.000030
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.220650 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.255750 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.227135 1 0.000027
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.228065 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.264071 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.234447 1 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.235400 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.270821 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.241857 1 0.000032
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.242887 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.276470 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.249240 1 0.000021
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.250241 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.285201 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.256668 1 0.000022
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.257659 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.293731 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.264283 1 0.000020
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.265277 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.299334 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.271553 1 0.000018
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.272555 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.306465 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.278878 1 0.000040
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.280071 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.315749 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.286146 1 0.000035
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.287187 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.323478 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.293548 1 0.000024
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.294603 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[7.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.330368 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.300786 1 0.000013
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.301902 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.339081 0 0.000000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:38.174800+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 1966080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 355294 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:39.174949+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 1867776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:40.175090+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 7 sent 5 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:09.463983+0000 osd.1 (osd.1) 6 : cluster [DBG] 7.1e scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:09.474598+0000 osd.1 (osd.1) 7 : cluster [DBG] 7.1e scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 7)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:09.463983+0000 osd.1 (osd.1) 6 : cluster [DBG] 7.1e scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:09.474598+0000 osd.1 (osd.1) 7 : cluster [DBG] 7.1e scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:41.175324+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e4000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:42.175500+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e4000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e4000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:43.175691+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 359227 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:44.175892+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 1900544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:45.176048+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 1900544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:46.176213+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 1892352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:47.176408+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 1892352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.815997124s of 10.003100395s, submitted: 235
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e4000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:48.176580+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 9 sent 7 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:17.558006+0000 osd.1 (osd.1) 8 : cluster [DBG] 7.1d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:17.568339+0000 osd.1 (osd.1) 9 : cluster [DBG] 7.1d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 1974272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 360776 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 9)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:17.558006+0000 osd.1 (osd.1) 8 : cluster [DBG] 7.1d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:17.568339+0000 osd.1 (osd.1) 9 : cluster [DBG] 7.1d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:49.176858+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:18.541437+0000 osd.1 (osd.1) 10 : cluster [DBG] 3.19 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:18.551993+0000 osd.1 (osd.1) 11 : cluster [DBG] 3.19 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 1892352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 11)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:18.541437+0000 osd.1 (osd.1) 10 : cluster [DBG] 3.19 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:18.551993+0000 osd.1 (osd.1) 11 : cluster [DBG] 3.19 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:50.177056+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 1892352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:51.177392+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:20.495690+0000 osd.1 (osd.1) 12 : cluster [DBG] 3.1a scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:20.506253+0000 osd.1 (osd.1) 13 : cluster [DBG] 3.1a scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 1884160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 13)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:20.495690+0000 osd.1 (osd.1) 12 : cluster [DBG] 3.1a scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:20.506253+0000 osd.1 (osd.1) 13 : cluster [DBG] 3.1a scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:52.178354+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 1884160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:53.179230+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 1875968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 368015 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:54.179898+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:23.458074+0000 osd.1 (osd.1) 14 : cluster [DBG] 7.12 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:23.468678+0000 osd.1 (osd.1) 15 : cluster [DBG] 7.12 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 1875968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 15)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:23.458074+0000 osd.1 (osd.1) 14 : cluster [DBG] 7.12 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:23.468678+0000 osd.1 (osd.1) 15 : cluster [DBG] 7.12 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:55.180556+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 1867776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:56.181194+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 1867776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:57.181710+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:58.182203+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 368015 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:59.182631+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.853828430s of 11.928766251s, submitted: 7
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 1818624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:00.183028+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:29.490645+0000 osd.1 (osd.1) 16 : cluster [DBG] 3.14 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:29.501275+0000 osd.1 (osd.1) 17 : cluster [DBG] 3.14 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 17)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:29.490645+0000 osd.1 (osd.1) 16 : cluster [DBG] 3.14 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:29.501275+0000 osd.1 (osd.1) 17 : cluster [DBG] 3.14 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 1810432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:01.183445+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 1794048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:02.183863+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:31.426611+0000 osd.1 (osd.1) 18 : cluster [DBG] 7.10 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:31.436903+0000 osd.1 (osd.1) 19 : cluster [DBG] 7.10 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 19)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:31.426611+0000 osd.1 (osd.1) 18 : cluster [DBG] 7.10 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:31.436903+0000 osd.1 (osd.1) 19 : cluster [DBG] 7.10 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 1794048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:03.184236+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:32.379396+0000 osd.1 (osd.1) 20 : cluster [DBG] 3.13 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:32.389831+0000 osd.1 (osd.1) 21 : cluster [DBG] 3.13 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 21)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:32.379396+0000 osd.1 (osd.1) 20 : cluster [DBG] 3.13 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:32.389831+0000 osd.1 (osd.1) 21 : cluster [DBG] 3.13 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 1794048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 375254 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:04.184559+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 1777664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:05.185174+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 1777664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:06.185403+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 1761280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:07.185655+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:36.348071+0000 osd.1 (osd.1) 22 : cluster [DBG] 7.17 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:36.358654+0000 osd.1 (osd.1) 23 : cluster [DBG] 7.17 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 1753088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 23)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:36.348071+0000 osd.1 (osd.1) 22 : cluster [DBG] 7.17 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:36.358654+0000 osd.1 (osd.1) 23 : cluster [DBG] 7.17 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:08.185909+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 1753088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 380080 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:09.186245+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:38.367906+0000 osd.1 (osd.1) 24 : cluster [DBG] 7.16 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:38.378469+0000 osd.1 (osd.1) 25 : cluster [DBG] 7.16 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 1728512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 25)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:38.367906+0000 osd.1 (osd.1) 24 : cluster [DBG] 7.16 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:38.378469+0000 osd.1 (osd.1) 25 : cluster [DBG] 7.16 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:10.186548+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.851775169s of 10.886636734s, submitted: 10
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 1720320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:11.186776+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:40.377210+0000 osd.1 (osd.1) 26 : cluster [DBG] 3.10 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:40.387760+0000 osd.1 (osd.1) 27 : cluster [DBG] 3.10 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 1712128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 27)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:40.377210+0000 osd.1 (osd.1) 26 : cluster [DBG] 3.10 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:40.387760+0000 osd.1 (osd.1) 27 : cluster [DBG] 3.10 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:12.187047+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 1703936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:13.187334+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:42.376437+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.14 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:42.387084+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.14 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 1703936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 384906 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 29)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:42.376437+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.14 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:42.387084+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.14 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:14.187712+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 1695744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:15.187963+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 1695744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:16.188287+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:45.334750+0000 osd.1 (osd.1) 30 : cluster [DBG] 7.b scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:45.345812+0000 osd.1 (osd.1) 31 : cluster [DBG] 7.b scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 1679360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 31)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:45.334750+0000 osd.1 (osd.1) 30 : cluster [DBG] 7.b scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:45.345812+0000 osd.1 (osd.1) 31 : cluster [DBG] 7.b scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:17.188523+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:46.374682+0000 osd.1 (osd.1) 32 : cluster [DBG] 3.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:46.385819+0000 osd.1 (osd.1) 33 : cluster [DBG] 3.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 1671168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:18.188736+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 33)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:46.374682+0000 osd.1 (osd.1) 32 : cluster [DBG] 3.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:46.385819+0000 osd.1 (osd.1) 33 : cluster [DBG] 3.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 1662976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 392139 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:19.188999+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:48.408754+0000 osd.1 (osd.1) 34 : cluster [DBG] 3.b scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:48.419342+0000 osd.1 (osd.1) 35 : cluster [DBG] 3.b scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 1662976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 35)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:48.408754+0000 osd.1 (osd.1) 34 : cluster [DBG] 3.b scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:48.419342+0000 osd.1 (osd.1) 35 : cluster [DBG] 3.b scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:20.189317+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 1662976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:21.189613+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.849021912s of 11.102084160s, submitted: 10
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 1630208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:22.189839+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:51.479984+0000 osd.1 (osd.1) 36 : cluster [DBG] 3.2 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:51.490623+0000 osd.1 (osd.1) 37 : cluster [DBG] 3.2 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 1630208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 37)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:51.479984+0000 osd.1 (osd.1) 36 : cluster [DBG] 3.2 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:51.490623+0000 osd.1 (osd.1) 37 : cluster [DBG] 3.2 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:23.190096+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 1622016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 394550 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:24.190353+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 1622016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:25.190583+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:54.418866+0000 osd.1 (osd.1) 38 : cluster [DBG] 3.0 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:54.429403+0000 osd.1 (osd.1) 39 : cluster [DBG] 3.0 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 39)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:54.418866+0000 osd.1 (osd.1) 38 : cluster [DBG] 3.0 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:54.429403+0000 osd.1 (osd.1) 39 : cluster [DBG] 3.0 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 1613824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:26.191023+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 1613824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:27.191280+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:56.421932+0000 osd.1 (osd.1) 40 : cluster [DBG] 7.0 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:56.431550+0000 osd.1 (osd.1) 41 : cluster [DBG] 7.0 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 41)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:56.421932+0000 osd.1 (osd.1) 40 : cluster [DBG] 7.0 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:56.431550+0000 osd.1 (osd.1) 41 : cluster [DBG] 7.0 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 1605632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:28.191585+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:57.394369+0000 osd.1 (osd.1) 42 : cluster [DBG] 3.4 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:57.404935+0000 osd.1 (osd.1) 43 : cluster [DBG] 3.4 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 43)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:57.394369+0000 osd.1 (osd.1) 42 : cluster [DBG] 3.4 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:57.404935+0000 osd.1 (osd.1) 43 : cluster [DBG] 3.4 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 1605632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 401783 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:29.191881+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 1597440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:30.192106+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:59.356990+0000 osd.1 (osd.1) 44 : cluster [DBG] 7.7 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:59.367685+0000 osd.1 (osd.1) 45 : cluster [DBG] 7.7 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 45)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:59.356990+0000 osd.1 (osd.1) 44 : cluster [DBG] 7.7 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:59.367685+0000 osd.1 (osd.1) 45 : cluster [DBG] 7.7 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 1597440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:31.192449+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 1589248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:32.192701+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 1589248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:33.192916+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 1581056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404194 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:34.193116+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 1581056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:35.193387+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 1572864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:36.193623+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.730809212s of 14.968996048s, submitted: 10
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 1572864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:37.193772+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:06.448866+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:06.459234+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 47)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:06.448866+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:06.459234+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 1556480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:38.193987+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:07.450788+0000 osd.1 (osd.1) 48 : cluster [DBG] 3.1c scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:07.461526+0000 osd.1 (osd.1) 49 : cluster [DBG] 3.1c scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 49)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:07.450788+0000 osd.1 (osd.1) 48 : cluster [DBG] 3.1c scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:07.461526+0000 osd.1 (osd.1) 49 : cluster [DBG] 3.1c scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 1548288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 409018 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:39.194260+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 1548288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:40.194448+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 1523712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:41.194620+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:10.439508+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.19 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:10.450060+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.19 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 1523712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 51)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:10.439508+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.19 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:10.450060+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.19 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:42.194877+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 1515520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:43.195102+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 1515520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 411431 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:44.195357+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 1507328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:45.195639+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 1499136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:46.195880+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.676377296s of 10.012275696s, submitted: 7
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 1490944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:47.196188+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:16.375038+0000 osd.1 (osd.1) 52 : cluster [DBG] 4.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:16.385602+0000 osd.1 (osd.1) 53 : cluster [DBG] 4.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 1482752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 53)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:16.375038+0000 osd.1 (osd.1) 52 : cluster [DBG] 4.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:16.385602+0000 osd.1 (osd.1) 53 : cluster [DBG] 4.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:48.196541+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 1482752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 413842 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:49.196932+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 1474560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:50.197467+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 1466368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:51.197756+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:20.408859+0000 osd.1 (osd.1) 54 : cluster [DBG] 6.c scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:20.419297+0000 osd.1 (osd.1) 55 : cluster [DBG] 6.c scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 1458176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 55)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:20.408859+0000 osd.1 (osd.1) 54 : cluster [DBG] 6.c scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:20.419297+0000 osd.1 (osd.1) 55 : cluster [DBG] 6.c scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:52.198147+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:21.369087+0000 osd.1 (osd.1) 56 : cluster [DBG] 6.1e scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:21.379691+0000 osd.1 (osd.1) 57 : cluster [DBG] 6.1e scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 1458176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 57)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:21.369087+0000 osd.1 (osd.1) 56 : cluster [DBG] 6.1e scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:21.379691+0000 osd.1 (osd.1) 57 : cluster [DBG] 6.1e scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:53.198684+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:22.366053+0000 osd.1 (osd.1) 58 : cluster [DBG] 6.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:22.380164+0000 osd.1 (osd.1) 59 : cluster [DBG] 6.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 1449984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 423488 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 59)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:22.366053+0000 osd.1 (osd.1) 58 : cluster [DBG] 6.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:22.380164+0000 osd.1 (osd.1) 59 : cluster [DBG] 6.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:54.198920+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:23.407331+0000 osd.1 (osd.1) 60 : cluster [DBG] 6.6 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:23.421672+0000 osd.1 (osd.1) 61 : cluster [DBG] 6.6 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 1441792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 61)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:23.407331+0000 osd.1 (osd.1) 60 : cluster [DBG] 6.6 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:23.421672+0000 osd.1 (osd.1) 61 : cluster [DBG] 6.6 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:55.199244+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:24.414103+0000 osd.1 (osd.1) 62 : cluster [DBG] 4.4 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:24.424684+0000 osd.1 (osd.1) 63 : cluster [DBG] 4.4 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 1433600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 63)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:24.414103+0000 osd.1 (osd.1) 62 : cluster [DBG] 4.4 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:24.424684+0000 osd.1 (osd.1) 63 : cluster [DBG] 4.4 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:56.199480+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:25.378646+0000 osd.1 (osd.1) 64 : cluster [DBG] 6.4 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:25.389016+0000 osd.1 (osd.1) 65 : cluster [DBG] 6.4 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 1433600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 65)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:25.378646+0000 osd.1 (osd.1) 64 : cluster [DBG] 6.4 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:25.389016+0000 osd.1 (osd.1) 65 : cluster [DBG] 6.4 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:57.199719+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:26.341676+0000 osd.1 (osd.1) 66 : cluster [DBG] 6.1 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:26.352225+0000 osd.1 (osd.1) 67 : cluster [DBG] 6.1 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 1425408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 67)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:26.341676+0000 osd.1 (osd.1) 66 : cluster [DBG] 6.1 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:26.352225+0000 osd.1 (osd.1) 67 : cluster [DBG] 6.1 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:58.200020+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 1425408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430721 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:59.200249+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 1417216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:00.200544+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 1409024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:01.200689+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 1400832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:02.200850+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 1400832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:03.201052+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.769033432s of 16.953893661s, submitted: 15
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 1400832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 433132 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:04.201251+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:33.415251+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.7 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:33.425703+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.7 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 69)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:33.415251+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.7 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:33.425703+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.7 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 1384448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:05.201613+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 1384448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:06.201849+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 1376256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:07.202030+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 1376256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:08.202281+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 1368064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 433132 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:09.202473+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 1351680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:10.202692+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:39.524338+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.5 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:39.534814+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.5 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 71)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:39.524338+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.5 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:39.534814+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.5 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 1343488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:11.203076+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 1335296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:12.203365+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 1335296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:13.203514+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.903487206s of 10.152817726s, submitted: 4
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 1327104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 437954 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:14.203737+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:43.568065+0000 osd.1 (osd.1) 72 : cluster [DBG] 6.e scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:43.582292+0000 osd.1 (osd.1) 73 : cluster [DBG] 6.e scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 73)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:43.568065+0000 osd.1 (osd.1) 72 : cluster [DBG] 6.e scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:43.582292+0000 osd.1 (osd.1) 73 : cluster [DBG] 6.e scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 1310720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:15.204031+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 1302528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:16.204207+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 1294336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:17.204375+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:46.605097+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.f scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:46.615788+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.f scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 75)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:46.605097+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.f scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:46.615788+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.f scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 1294336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:18.204661+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 1253376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 442776 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:19.204970+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:48.686742+0000 osd.1 (osd.1) 76 : cluster [DBG] 6.b scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:48.700922+0000 osd.1 (osd.1) 77 : cluster [DBG] 6.b scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 1245184 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 77)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:48.686742+0000 osd.1 (osd.1) 76 : cluster [DBG] 6.b scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:48.700922+0000 osd.1 (osd.1) 77 : cluster [DBG] 6.b scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:20.205290+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 1236992 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:21.205507+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 1228800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:22.205717+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:51.678881+0000 osd.1 (osd.1) 78 : cluster [DBG] 4.2 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:51.689495+0000 osd.1 (osd.1) 79 : cluster [DBG] 4.2 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 1228800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 79)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:51.678881+0000 osd.1 (osd.1) 78 : cluster [DBG] 4.2 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:51.689495+0000 osd.1 (osd.1) 79 : cluster [DBG] 4.2 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:23.206076+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 1220608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 445187 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:24.206267+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.041762352s of 11.141496658s, submitted: 8
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 1204224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:25.206469+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:54.709627+0000 osd.1 (osd.1) 80 : cluster [DBG] 4.9 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:54.720249+0000 osd.1 (osd.1) 81 : cluster [DBG] 4.9 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 1204224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 81)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:54.709627+0000 osd.1 (osd.1) 80 : cluster [DBG] 4.9 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:54.720249+0000 osd.1 (osd.1) 81 : cluster [DBG] 4.9 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:26.206727+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 1204224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:27.206907+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 1204224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:28.209211+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 1196032 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 447598 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:29.209343+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 1196032 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:30.209477+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 1187840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:31.210759+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 1187840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:32.211219+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 1179648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:33.213190+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:02.688453+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.8 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:02.699130+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.8 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 83)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:02.688453+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.8 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:02.699130+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.8 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 450009 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1171456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:34.218711+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1171456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:35.220436+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.987721443s of 10.996203423s, submitted: 4
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1163264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:36.220787+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:05.705804+0000 osd.1 (osd.1) 84 : cluster [DBG] 6.17 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:05.716373+0000 osd.1 (osd.1) 85 : cluster [DBG] 6.17 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 85)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:05.705804+0000 osd.1 (osd.1) 84 : cluster [DBG] 6.17 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:05.716373+0000 osd.1 (osd.1) 85 : cluster [DBG] 6.17 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1155072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:37.221178+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1146880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:38.221430+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 452422 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1146880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:39.221576+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1146880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:40.221852+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1138688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:41.222067+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:10.667645+0000 osd.1 (osd.1) 86 : cluster [DBG] 4.12 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:10.678206+0000 osd.1 (osd.1) 87 : cluster [DBG] 4.12 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 87)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:10.667645+0000 osd.1 (osd.1) 86 : cluster [DBG] 4.12 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:10.678206+0000 osd.1 (osd.1) 87 : cluster [DBG] 4.12 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1138688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:42.223089+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 1130496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:43.223285+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454835 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 1130496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:44.223461+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 1130496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:45.223646+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.999300957s of 10.010235786s, submitted: 4
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 1114112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:46.223806+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:15.716032+0000 osd.1 (osd.1) 88 : cluster [DBG] 6.2 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:15.726555+0000 osd.1 (osd.1) 89 : cluster [DBG] 6.2 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 1114112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 89)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:15.716032+0000 osd.1 (osd.1) 88 : cluster [DBG] 6.2 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:15.726555+0000 osd.1 (osd.1) 89 : cluster [DBG] 6.2 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:47.223991+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 1105920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:48.225270+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:17.771947+0000 osd.1 (osd.1) 90 : cluster [DBG] 4.14 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:17.782456+0000 osd.1 (osd.1) 91 : cluster [DBG] 4.14 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462072 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 1097728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 91)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:17.771947+0000 osd.1 (osd.1) 90 : cluster [DBG] 4.14 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:17.782456+0000 osd.1 (osd.1) 91 : cluster [DBG] 4.14 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:49.225473+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:18.761646+0000 osd.1 (osd.1) 92 : cluster [DBG] 6.1c scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:18.775734+0000 osd.1 (osd.1) 93 : cluster [DBG] 6.1c scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1089536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 93)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:18.761646+0000 osd.1 (osd.1) 92 : cluster [DBG] 6.1c scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:18.775734+0000 osd.1 (osd.1) 93 : cluster [DBG] 6.1c scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:50.225696+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:19.802211+0000 osd.1 (osd.1) 94 : cluster [DBG] 2.1b scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:19.812842+0000 osd.1 (osd.1) 95 : cluster [DBG] 2.1b scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1089536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 95)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:19.802211+0000 osd.1 (osd.1) 94 : cluster [DBG] 2.1b scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:19.812842+0000 osd.1 (osd.1) 95 : cluster [DBG] 2.1b scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:51.225966+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:20.833098+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.13 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:20.843600+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.13 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1146880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 97)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:20.833098+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.13 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:20.843600+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.13 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:52.226247+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:21.786475+0000 osd.1 (osd.1) 98 : cluster [DBG] 5.12 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:21.797066+0000 osd.1 (osd.1) 99 : cluster [DBG] 5.12 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1138688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 99)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:21.786475+0000 osd.1 (osd.1) 98 : cluster [DBG] 5.12 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:21.797066+0000 osd.1 (osd.1) 99 : cluster [DBG] 5.12 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:53.226483+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:22.836857+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.15 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:22.847295+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.15 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 474135 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 1114112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 101)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:22.836857+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.15 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:22.847295+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.15 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:54.226790+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:23.816961+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.9 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:23.827507+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.9 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1089536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 103)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:23.816961+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.9 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:23.827507+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.9 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:55.227008+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:24.775991+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.11 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:24.786408+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.11 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1089536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 105)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:24.775991+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.11 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:24.786408+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.11 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:56.227270+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.001464844s of 11.047466278s, submitted: 18
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 1081344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:57.227438+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:26.763623+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.a scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:26.774202+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.a scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 107)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:26.763623+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.a scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:26.774202+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.a scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 1073152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:58.227640+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 478959 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 1073152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:59.227779+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 1048576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:00.227945+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 1048576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:01.228224+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:02.228795+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:03.229289+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 478959 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:04.229465+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:05.229797+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:06.230258+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:07.230795+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:08.231786+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 478959 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:09.231954+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:10.232207+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.007675171s of 14.012120247s, submitted: 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:11.232840+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:40.775772+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.16 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:40.786291+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.16 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 109)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:40.775772+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.16 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:40.786291+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.16 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:12.233531+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:13.233854+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:14.234231+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481372 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:15.234391+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:44.744417+0000 osd.1 (osd.1) 110 : cluster [DBG] 2.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:44.754904+0000 osd.1 (osd.1) 111 : cluster [DBG] 2.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 111)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:44.744417+0000 osd.1 (osd.1) 110 : cluster [DBG] 2.d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:44.754904+0000 osd.1 (osd.1) 111 : cluster [DBG] 2.d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:16.234740+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:17.234902+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:18.235251+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:19.235596+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:48.714252+0000 osd.1 (osd.1) 112 : cluster [DBG] 2.5 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:48.724803+0000 osd.1 (osd.1) 113 : cluster [DBG] 2.5 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 486194 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 113)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:48.714252+0000 osd.1 (osd.1) 112 : cluster [DBG] 2.5 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:48.724803+0000 osd.1 (osd.1) 113 : cluster [DBG] 2.5 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:20.235874+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:49.751737+0000 osd.1 (osd.1) 114 : cluster [DBG] 2.4 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:49.762265+0000 osd.1 (osd.1) 115 : cluster [DBG] 2.4 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 115)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:49.751737+0000 osd.1 (osd.1) 114 : cluster [DBG] 2.4 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:49.762265+0000 osd.1 (osd.1) 115 : cluster [DBG] 2.4 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:21.236180+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.041901588s of 11.056705475s, submitted: 8
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:22.236584+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:51.831978+0000 osd.1 (osd.1) 116 : cluster [DBG] 2.17 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:51.842558+0000 osd.1 (osd.1) 117 : cluster [DBG] 2.17 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 117)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:51.831978+0000 osd.1 (osd.1) 116 : cluster [DBG] 2.17 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:51.842558+0000 osd.1 (osd.1) 117 : cluster [DBG] 2.17 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:23.236817+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:24.237009+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:53.859446+0000 osd.1 (osd.1) 118 : cluster [DBG] 2.7 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:53.869980+0000 osd.1 (osd.1) 119 : cluster [DBG] 2.7 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 493429 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 119)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:53.859446+0000 osd.1 (osd.1) 118 : cluster [DBG] 2.7 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:53.869980+0000 osd.1 (osd.1) 119 : cluster [DBG] 2.7 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:25.237236+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:54.843250+0000 osd.1 (osd.1) 120 : cluster [DBG] 4.10 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:54.853763+0000 osd.1 (osd.1) 121 : cluster [DBG] 4.10 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 121)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:54.843250+0000 osd.1 (osd.1) 120 : cluster [DBG] 4.10 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:54.853763+0000 osd.1 (osd.1) 121 : cluster [DBG] 4.10 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:26.237434+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:27.237699+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:56.879766+0000 osd.1 (osd.1) 122 : cluster [DBG] 6.1d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:56.893939+0000 osd.1 (osd.1) 123 : cluster [DBG] 6.1d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:28.238016+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 123)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:56.879766+0000 osd.1 (osd.1) 122 : cluster [DBG] 6.1d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:56.893939+0000 osd.1 (osd.1) 123 : cluster [DBG] 6.1d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:29.238185+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 498255 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:30.238531+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:31.238993+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.050240517s of 10.068576813s, submitted: 8
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:32.239718+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:01.900540+0000 osd.1 (osd.1) 124 : cluster [DBG] 2.6 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:01.911116+0000 osd.1 (osd.1) 125 : cluster [DBG] 2.6 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 125)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:01.900540+0000 osd.1 (osd.1) 124 : cluster [DBG] 2.6 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:01.911116+0000 osd.1 (osd.1) 125 : cluster [DBG] 2.6 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:33.240610+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:34.241106+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 500666 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:35.241433+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:36.241756+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:37.241919+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:38.242414+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:39.242581+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:08.918672+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.1 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:08.929248+0000 osd.1 (osd.1) 127 : cluster [DBG] 5.1 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 503077 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 127)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:08.918672+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.1 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:08.929248+0000 osd.1 (osd.1) 127 : cluster [DBG] 5.1 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:40.243331+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:41.243859+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:42.244281+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:43.244695+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.022533417s of 12.031448364s, submitted: 4
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:44.244931+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:13.932561+0000 osd.1 (osd.1) 128 : cluster [DBG] 2.9 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:13.943103+0000 osd.1 (osd.1) 129 : cluster [DBG] 2.9 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 505488 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 129)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:13.932561+0000 osd.1 (osd.1) 128 : cluster [DBG] 2.9 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:13.943103+0000 osd.1 (osd.1) 129 : cluster [DBG] 2.9 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:45.245268+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:46.245461+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:47.245724+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:16.831181+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.f scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:16.841889+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.f scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 131)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:16.831181+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.f scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:16.841889+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.f scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:48.246343+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:49.246534+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 507899 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:50.246722+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:51.247203+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:52.247433+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:21.891339+0000 osd.1 (osd.1) 132 : cluster [DBG] 5.1a scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:21.901919+0000 osd.1 (osd.1) 133 : cluster [DBG] 5.1a scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 133)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:21.891339+0000 osd.1 (osd.1) 132 : cluster [DBG] 5.1a scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:21.901919+0000 osd.1 (osd.1) 133 : cluster [DBG] 5.1a scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:53.247691+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:54.247890+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 510312 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.010965347s of 11.032461166s, submitted: 6
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:55.248100+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:24.965100+0000 osd.1 (osd.1) 134 : cluster [DBG] 2.3 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:24.975677+0000 osd.1 (osd.1) 135 : cluster [DBG] 2.3 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 135)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:24.965100+0000 osd.1 (osd.1) 134 : cluster [DBG] 2.3 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:24.975677+0000 osd.1 (osd.1) 135 : cluster [DBG] 2.3 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:56.248365+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:57.248550+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:26.996725+0000 osd.1 (osd.1) 136 : cluster [DBG] 5.1d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:27.007250+0000 osd.1 (osd.1) 137 : cluster [DBG] 5.1d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 137)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:26.996725+0000 osd.1 (osd.1) 136 : cluster [DBG] 5.1d scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:27.007250+0000 osd.1 (osd.1) 137 : cluster [DBG] 5.1d scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:58.249018+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:59.249235+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 515136 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:00.249398+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:01.249664+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:31.013697+0000 osd.1 (osd.1) 138 : cluster [DBG] 5.19 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:31.024318+0000 osd.1 (osd.1) 139 : cluster [DBG] 5.19 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 139)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:31.013697+0000 osd.1 (osd.1) 138 : cluster [DBG] 5.19 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:31.024318+0000 osd.1 (osd.1) 139 : cluster [DBG] 5.19 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:02.250055+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:32.043399+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.c scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:32.053979+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.c scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 141)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:32.043399+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.c scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:32.053979+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.c scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:03.250362+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:33.062053+0000 osd.1 (osd.1) 142 : cluster [DBG] 5.18 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:33.072649+0000 osd.1 (osd.1) 143 : cluster [DBG] 5.18 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 143)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:33.062053+0000 osd.1 (osd.1) 142 : cluster [DBG] 5.18 scrub starts
Jan 29 09:35:45 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:33.072649+0000 osd.1 (osd.1) 143 : cluster [DBG] 5.18 scrub ok
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:04.250632+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:05.250791+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:06.250965+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:07.251155+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:08.251325+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:09.251740+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:10.252067+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:11.252434+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:12.252906+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:13.253092+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:14.253476+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:15.253728+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:16.253919+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:17.254272+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:18.254604+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:19.254886+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:20.255176+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:21.255506+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:22.255890+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:23.256285+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:24.256542+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:25.256763+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:26.256948+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:27.257166+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:28.257335+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:29.257558+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:30.257813+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:31.258034+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:32.258276+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:33.258425+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:34.258667+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:35.258823+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:36.259104+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:37.259380+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:38.259598+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:39.259738+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:40.259866+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:41.260750+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:42.260986+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:43.261151+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:44.261376+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:45.261535+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:46.261743+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:47.262051+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:48.262262+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:49.262472+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:50.262699+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:51.262898+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:52.263174+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:53.263395+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:54.263598+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:55.263758+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:56.263927+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:57.264183+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:58.264340+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:59.264497+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:00.264870+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:01.265055+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:02.265202+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:03.265351+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:04.265482+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:05.265614+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:06.265885+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:07.266067+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:08.266243+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:09.266384+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:10.266590+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:11.266727+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:12.266910+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:13.267055+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:14.267201+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:15.267318+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:16.267461+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:17.267635+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:18.267800+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:19.267993+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:20.268239+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:21.268456+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:22.268720+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:23.268958+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:24.269119+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:25.269394+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:26.269560+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:27.269756+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:28.270019+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:29.270236+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:30.270405+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:31.270612+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:32.270906+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:33.271106+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:34.271360+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:35.271624+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:36.271881+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:37.272094+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:38.272317+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:39.272656+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 294912 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:40.272940+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 294912 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:41.273205+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:42.273614+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:43.273924+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:44.274405+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:45.274664+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:46.274952+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:47.275249+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:48.275478+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:49.275750+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:50.276033+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:51.276326+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: no keepalive since 2026-01-29T09:19:21.276400+0000 (2106-02-07T06:28:15.999913+0000 seconds), reconnecting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _reopen_session rank -1
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _add_conns ranks=[0]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient(hunting): picked mon.compute-0 con 0x5579fea9fc00 addr [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient(hunting): start opening mon connection
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient(hunting): _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient(hunting): get_auth_request con 0x5579fea9fc00 auth_method 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient(hunting): _init_auth method 2
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient(hunting): _init_auth already have auth, reseting
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient(hunting): handle_auth_reply_more payload 9
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient(hunting): handle_auth_reply_more payload_len 9
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient(hunting): handle_auth_reply_more responding with 132 bytes
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient(hunting): handle_auth_done global_id 14197 payload 293
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _finish_hunting 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: found mon.compute-0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _finish_auth 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:51.277778+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_monmap mon_map magic: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient:  got monmap 1 from mon.compute-0 (according to old e1)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: dump:
                                           epoch 1
                                           fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
                                           last_changed 2026-01-29T09:11:34.210489+0000
                                           created 2026-01-29T09:11:34.210489+0000
                                           min_mon_release 20 (tentacle)
                                           election_strategy: 1
                                           0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_config config(9 keys)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: set_mon_vals no callback set
Jan 29 09:35:45 compute-0 ceph-osd[87035]: mgrc handle_mgr_map Got map version 9
Jan 29 09:35:45 compute-0 ceph-osd[87035]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1795618739,v1:192.168.122.100:6801/1795618739]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:55.777397+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:56.777634+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:57.777874+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:58.778066+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:59.778282+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:00.778479+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:01.778679+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 147456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:02.779187+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 147456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:03.779407+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:04.779606+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:05.779762+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:06.779970+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:07.780326+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:08.780517+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:09.780741+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:10.780904+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:11.781112+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:12.781580+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:13.781752+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:14.781965+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:15.782183+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:16.782365+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:17.782586+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:18.782855+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 90112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:19.783041+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 90112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:20.783360+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 90112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:21.783623+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:22.783936+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:23.784299+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:24.784648+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:25.784988+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:26.785274+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:27.785561+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:28.785769+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:29.785966+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:30.786300+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:31.786582+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:32.786941+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:33.787215+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:34.787471+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:35.787722+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:36.787912+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:37.788051+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:38.788291+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:39.788470+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:40.788668+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:41.788827+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:42.789073+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:43.789300+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 8192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:44.789478+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 8192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:45.789623+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:46.789839+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:47.790078+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:48.790319+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:49.790586+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:50.790750+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:51.790963+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:52.791210+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 1024000 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:53.791345+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 1024000 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:54.791489+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:55.791642+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:56.791876+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:57.792053+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:58.792324+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:59.792568+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:00.792854+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 999424 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:01.793225+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 999424 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:02.793787+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:03.794057+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:04.794301+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:05.794513+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:06.794717+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:07.794876+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:08.795089+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:09.795301+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:10.795479+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:11.795699+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:12.795913+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:13.796073+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:14.796278+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:15.796487+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:16.796930+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:17.797099+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:18.797222+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:19.797382+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:20.797593+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:21.797785+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:22.797957+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:23.798103+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:24.798290+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:25.798430+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:26.798654+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:27.798806+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:28.799355+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:29.799546+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:30.799787+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:31.799980+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:32.800206+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:33.800425+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:34.800631+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:35.800904+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:36.801179+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:37.801371+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:38.801594+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:39.801892+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:40.802126+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:41.802380+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:42.802785+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:43.803019+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:44.803278+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:45.803525+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:46.803732+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:47.803937+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:48.837676+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:49.837951+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:50.838180+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:51.838373+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:52.838818+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:53.839027+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:54.839245+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:55.839419+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:56.839653+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:57.839905+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:58.840085+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:59.840236+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:00.840473+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:01.840705+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:02.841055+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:03.841227+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:04.841377+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:05.841528+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:06.841676+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:07.841877+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:08.842064+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:09.842256+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:10.842433+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:11.842648+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:12.843557+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:13.843737+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:14.843866+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:15.844014+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:16.844189+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:17.844379+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:18.844605+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:19.844825+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:20.845009+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:21.845200+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:22.845447+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 720896 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:23.845639+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 720896 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:24.845788+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 712704 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:25.845931+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 712704 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:26.846111+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 712704 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:27.846313+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 704512 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:28.846495+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 704512 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:29.846688+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:30.846901+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:31.847124+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:32.847382+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:33.847536+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:34.847678+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:35.847826+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:36.848104+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65380352 unmapped: 671744 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:37.848339+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65380352 unmapped: 671744 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:38.848501+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65380352 unmapped: 671744 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:39.848642+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 663552 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:40.848783+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 663552 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 29 09:35:45 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919191723' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:41.848984+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 663552 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:42.849216+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 655360 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:43.849407+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 655360 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:44.849580+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 647168 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:45.849796+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 647168 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:46.849995+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 647168 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:47.850160+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 638976 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:48.850320+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 638976 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:49.850441+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 630784 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:50.850600+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 630784 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:51.850763+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65429504 unmapped: 622592 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:52.850947+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65429504 unmapped: 622592 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:53.851080+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65437696 unmapped: 614400 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:54.851571+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65437696 unmapped: 614400 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:55.851727+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65437696 unmapped: 614400 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:56.851918+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 606208 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:57.852082+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 606208 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:58.852216+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 606208 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:59.852356+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 598016 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:00.852507+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 589824 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:01.852672+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 589824 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:02.852914+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 589824 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:03.853086+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 581632 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:04.853245+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 581632 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:05.853575+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 581632 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:06.853872+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 573440 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:07.854223+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 573440 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:08.854381+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:09.854531+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:10.854718+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:11.855087+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 557056 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:12.855528+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 557056 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:13.855865+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 557056 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:14.856259+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 548864 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:15.856685+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 548864 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:16.856875+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 540672 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:17.857093+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 540672 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:18.857400+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 532480 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:19.857589+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 532480 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:20.857848+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 532480 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:21.858121+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 524288 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:22.858372+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 524288 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:23.858607+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 516096 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:24.858775+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 516096 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:25.858923+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 516096 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:26.859083+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 507904 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:27.859287+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Cumulative writes: 4322 writes, 19K keys, 4322 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4322 writes, 406 syncs, 10.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4322 writes, 19K keys, 4322 commit groups, 1.0 writes per commit group, ingest: 16.03 MB, 0.03 MB/s
                                           Interval WAL: 4322 writes, 406 syncs, 10.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 425984 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:28.859476+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 425984 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:29.859695+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 417792 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:30.859864+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 417792 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:31.860032+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 417792 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:32.860243+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:33.860434+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 409600 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:34.860584+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 409600 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:35.860729+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 401408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:36.860887+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 401408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:37.861035+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 401408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:38.861239+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 393216 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:39.861381+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 393216 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:40.861504+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:41.861727+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:42.862042+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:43.862338+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:44.862526+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:45.862745+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:46.863053+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:47.863861+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:48.864096+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 368640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:49.864345+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 368640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:50.864536+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 360448 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:51.864791+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 360448 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:52.865057+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 360448 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:53.865292+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 352256 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:54.865479+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 352256 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:55.865658+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 344064 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:56.865879+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 344064 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:57.866089+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 344064 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:58.866226+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 335872 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:59.866416+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 335872 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:00.866579+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 327680 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:01.866788+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 327680 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:02.867029+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 319488 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:03.867215+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 319488 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:04.867408+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 319488 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:05.867606+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 311296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:06.867749+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 311296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:07.867918+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 303104 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:08.868084+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 303104 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:09.868277+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 303104 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:10.868447+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 303104 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:11.868624+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 294912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:12.868873+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 294912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:13.869018+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 286720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:14.869160+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 286720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:15.869340+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:16.869466+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:17.869629+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:18.869783+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 270336 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:19.869982+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 270336 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:20.870156+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 262144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:21.870315+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 262144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:22.870501+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 262144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:23.870628+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 253952 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:24.870758+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 253952 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:25.870898+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 245760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:26.871043+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 245760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:27.871197+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 245760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:28.871324+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 237568 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:29.871516+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 237568 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:30.871660+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 237568 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:31.871852+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 229376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:32.872099+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 229376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:33.872223+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 229376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:34.872409+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 221184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:35.872565+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 221184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:36.872756+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 212992 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:37.872905+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 212992 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:38.873187+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 212992 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:39.873346+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:40.873480+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:41.873604+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 196608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:42.873785+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 196608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:43.873945+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 196608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:44.874081+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 188416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:45.874187+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 188416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:46.874332+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 180224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:47.874530+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 180224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:48.874662+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 172032 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:49.874801+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 172032 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:50.874940+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 172032 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:51.875070+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 163840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:52.875256+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 163840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:53.875390+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 163840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:54.875520+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 155648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:55.875661+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 155648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:56.875819+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 155648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:57.875984+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 147456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:58.876199+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 147456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:59.876500+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 139264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:00.876705+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 139264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:01.876853+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 131072 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:02.877083+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 131072 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:03.877346+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 122880 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:04.877488+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 122880 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:05.877743+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 122880 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:06.877970+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 114688 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:07.878199+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 114688 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:08.878361+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 106496 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:09.878571+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 106496 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:10.878727+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 106496 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:11.878937+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 98304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:12.879212+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 98304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:13.879367+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 98304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:14.879598+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 90112 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:15.879773+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 90112 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:16.879926+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 81920 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:17.880064+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 81920 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:18.880298+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 73728 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:19.880530+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 73728 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:20.880686+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 73728 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:21.880857+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 65536 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:22.881070+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 65536 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:23.881226+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:24.881460+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:25.881641+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:26.881818+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:27.881989+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:28.882225+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:29.882413+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:30.882615+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:31.882802+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:32.882992+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:33.883236+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:34.883381+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:35.883583+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:36.883718+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:37.883838+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:38.884608+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:39.884738+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:40.884957+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:41.885168+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:42.885518+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:43.885719+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:44.885935+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:45.886144+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:46.886370+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:48.578355+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:49.578551+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:50.578718+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:51.578855+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:52.579053+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:53.579295+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:54.579460+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:55.579664+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:56.579841+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:57.579998+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:58.580223+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:59.580437+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:00.580643+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:01.580829+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:02.581051+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:03.581323+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:04.581528+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:05.581721+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:06.581965+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:07.582175+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:08.582363+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:09.582578+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:10.582996+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:11.583230+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:12.583394+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:13.583630+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:14.583808+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:15.583973+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:16.584115+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:17.584314+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:18.584453+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:19.584632+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:20.584814+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:21.584989+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:22.585125+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:23.585313+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:24.585455+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:25.585600+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:26.585783+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:27.585977+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:28.586092+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:29.586237+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:30.586374+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:31.586536+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:32.586706+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:33.586884+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:34.587062+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:35.587208+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:36.587351+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:37.587510+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:38.588020+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:39.588205+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:40.588495+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:41.588748+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:42.588963+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:43.589329+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:44.589553+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:45.589713+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:46.589910+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:47.590688+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:48.590882+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:49.591015+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:50.591191+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:51.591489+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:52.591626+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:53.591882+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:54.592045+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:55.592194+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:56.592347+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:57.592463+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:58.593059+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:59.593269+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:00.593403+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:01.593542+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:02.593682+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:03.593902+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:04.594058+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:05.594202+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:06.594382+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:07.594553+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:08.594704+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:09.594851+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:10.595028+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:11.595203+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:12.595397+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:13.595597+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:14.595745+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:15.595907+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:16.596076+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:17.596208+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:18.596398+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:19.596601+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:20.596846+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:21.597099+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:22.597260+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:23.597519+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:24.597711+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:25.597867+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:26.598049+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:27.598185+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:28.598359+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:29.598550+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:30.598722+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:31.598871+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:32.599032+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:33.599189+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:34.599338+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:35.599497+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:36.599648+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:37.599787+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:38.599984+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:39.600200+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:40.600371+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:41.600527+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:42.600700+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:43.600923+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:44.601187+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:45.601308+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:46.601439+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:47.601572+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:48.601817+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:49.601937+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:50.602213+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:51.602400+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:52.602609+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:53.602924+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:54.603185+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:55.603461+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:56.603629+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:57.603796+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:58.603971+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:59.604162+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:00.604489+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:01.604632+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:02.604799+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:03.605001+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:04.605166+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:05.605344+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:06.605544+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:07.605716+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:08.605866+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:09.605997+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:10.606192+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:11.606328+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:12.606483+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:13.606650+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:14.606804+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:15.606989+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:16.608604+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:17.608785+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:18.609011+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:19.609272+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:20.609509+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:21.609689+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:22.610290+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:23.610709+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:24.610920+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:25.611253+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:26.611564+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:27.611709+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:28.612063+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:29.612230+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:30.612414+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:31.612556+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:32.612724+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:33.613033+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:34.613224+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:35.613379+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:36.613504+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:37.613631+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:38.613758+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:39.614049+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:40.614824+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:41.614984+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:42.615168+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:43.615377+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: mgrc ms_handle_reset ms_handle_reset con 0x5579fcf14000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1795618739
Jan 29 09:35:45 compute-0 ceph-osd[87035]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1795618739,v1:192.168.122.100:6801/1795618739]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: get_auth_request con 0x5579fd00c400 auth_method 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: mgrc handle_mgr_configure stats_period=5
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:44.615499+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 ms_handle_reset con 0x5579fd8ccc00 session 0x5579fd0b7340
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fc7b2400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:45.615648+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 ms_handle_reset con 0x5579fd8cd000 session 0x5579fce748c0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd902400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:46.615766+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:47.615872+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:48.616021+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:49.616161+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:50.616305+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:51.616454+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:52.616600+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:53.616767+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:54.616922+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:55.617072+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:56.617214+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:57.617391+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:58.617557+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:59.617689+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:00.617871+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:01.618023+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:02.618212+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:03.618888+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:04.619626+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:05.621237+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:06.621365+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:07.621510+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:08.621620+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:09.621743+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:10.621875+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:11.622092+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:12.622199+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:13.622370+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:14.622503+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:15.622621+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:16.622927+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:17.623056+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:18.623180+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:19.623398+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:20.623520+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:21.623668+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:22.623826+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:23.623993+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:24.624124+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:25.624328+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:26.624463+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:27.624597+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:28.624728+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:29.624851+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:30.624960+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:31.625098+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:32.625222+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:33.625358+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:34.625549+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:35.625712+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:36.625868+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:37.626029+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:38.626192+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:39.626441+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:40.626574+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:41.626692+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:42.626824+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:43.626992+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:44.627104+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:45.627243+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:46.627454+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:47.627647+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:48.627817+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:49.627943+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:50.628074+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:51.628263+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:52.628430+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:53.628634+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:54.628772+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:55.628914+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:56.629059+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:57.629219+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:58.629393+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:59.629539+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:00.629702+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:01.629879+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:02.630077+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:03.630286+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:04.630419+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:05.630580+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:06.630704+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:07.630836+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:08.631005+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:09.631161+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:10.631335+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:11.631452+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:12.631605+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:13.631768+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:14.631942+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:15.632086+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:16.632220+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:17.632367+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:18.632550+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:19.632745+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:20.632897+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:21.633015+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:22.633200+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:23.633419+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:24.633585+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:25.633715+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:26.633834+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:27.633981+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:28.634167+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:29.634252+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:30.634368+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:31.634514+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:32.634697+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:33.634879+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:34.635017+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:35.635176+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:36.635325+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:37.635489+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:38.635604+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:39.635763+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:40.635921+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:41.636049+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:42.636182+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:43.636371+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:44.636486+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:45.636614+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:46.636762+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:47.636847+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:48.636955+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:49.637104+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:50.637457+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:51.637643+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:52.637818+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:53.638045+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:54.638200+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:55.638341+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:56.638475+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:57.638664+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:58.638839+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:59.639055+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:00.639215+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:01.639391+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:02.639896+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:03.641058+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:04.641609+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:05.641749+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:06.642380+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:07.642513+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:08.642719+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:09.642917+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:10.643120+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:11.643395+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:12.643586+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:13.643790+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:14.643953+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:15.644122+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:16.644284+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:17.644432+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:18.644568+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:19.644820+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:20.644979+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:21.645196+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:22.645387+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:23.645618+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:24.645777+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:25.645944+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:26.646188+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:27.646354+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:28.646594+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:29.646788+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:30.646961+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:31.647166+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:32.647305+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:33.647493+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:34.647624+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:35.647849+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:36.648044+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:37.648305+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:38.648511+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:39.648698+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:40.648873+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:41.649052+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:42.649255+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:43.649487+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:44.649657+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:45.649818+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:46.649970+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:47.650197+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:48.650359+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:49.650544+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:50.650732+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:51.650896+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:52.651039+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:53.651210+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:54.651379+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:55.651639+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:56.651851+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:57.652012+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:58.652214+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:59.652367+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:00.652672+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:01.652848+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:02.653068+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:03.653282+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:04.653433+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:05.653571+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:06.653738+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:07.653947+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:08.654190+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:09.654385+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:10.654546+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:11.654718+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:12.654911+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:13.655202+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:14.655403+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:15.655559+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:16.655757+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:17.655945+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:18.656114+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:19.656380+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:20.656910+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:21.657061+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:22.657194+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:23.657446+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:24.657688+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:25.657880+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:26.658076+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:27.658276+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:28.658472+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:29.658627+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:30.658822+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:31.658990+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:32.659187+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:33.659370+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:34.659518+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:35.659740+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:36.659884+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:37.660023+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:38.660203+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:39.660366+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:40.660569+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:41.660747+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:42.660931+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:43.661208+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:44.661353+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:45.661544+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:46.661715+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:47.661894+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:48.662086+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:49.662233+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:50.662360+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:51.662519+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:52.662663+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:53.662868+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:54.663045+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:55.663209+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:56.663356+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:57.663542+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:58.663737+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:59.663908+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:00.664091+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:01.664308+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:02.664474+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:03.664713+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:04.664912+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:05.665058+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:06.665204+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:07.665379+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:08.665554+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cd000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:09.665746+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 46 handle_osd_map epochs [46,47], i have 47, src has [1,47]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 914.842529297s of 914.901733398s, submitted: 10
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:10.665853+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 47 handle_osd_map epochs [47,48], i have 48, src has [1,48]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:11.666016+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 17293312 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 49 ms_handle_reset con 0x5579fd8cd000 session 0x5579fef88a80
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:12.666191+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 49 heartbeat osd_stat(store_statfs(0x4fd466000/0x0/0x4ffc00000, data 0xd12692/0xd60000, compress 0x0/0x0/0x0, omap 0x5ee7, meta 0x1a2a119), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 17285120 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 599269 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:13.666338+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 17063936 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 49 handle_osd_map epochs [49,50], i have 49, src has [1,50]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 50 ms_handle_reset con 0x5579ff11e400 session 0x5579ff2f21c0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:14.666506+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 17063936 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:15.666636+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 50 heartbeat osd_stat(store_statfs(0x4fcc66000/0x0/0x4ffc00000, data 0x1513c8b/0x1564000, compress 0x0/0x0/0x0, omap 0x64aa, meta 0x1a29b56), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 17063936 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:16.666792+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 17063936 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:17.666951+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 17063936 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 645844 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:18.667298+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 17063936 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:19.667471+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14778 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:45 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 29 09:35:45 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]: 2026-01-29T09:35:45.431+0000 7f5f5ebc1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:20.667687+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:21.667853+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:22.668003+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 648472 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:23.668184+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:24.668334+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:25.668486+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:26.668654+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:27.668779+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 648472 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Cumulative writes: 4402 writes, 20K keys, 4402 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4402 writes, 440 syncs, 10.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 80 writes, 336 keys, 80 commit groups, 1.0 writes per commit group, ingest: 0.20 MB, 0.00 MB/s
                                           Interval WAL: 80 writes, 34 syncs, 2.35 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:28.668958+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:29.669074+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:30.669201+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:31.669381+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:32.669525+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 648472 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:33.669679+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:34.669879+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:35.670043+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:36.670198+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:37.670332+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 648472 data_alloc: 218103808 data_used: 0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:38.670459+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:39.670623+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:40.670774+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:41.670998+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 16867328 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11ec00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:42.671112+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.686981201s of 32.861743927s, submitted: 46
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 52 ms_handle_reset con 0x5579ff11e800 session 0x5579fefbbdc0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 52 ms_handle_reset con 0x5579ff11ec00 session 0x5579fef9fa40
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 16637952 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 654840 data_alloc: 218103808 data_used: 19
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:43.671290+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11f000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 52 ms_handle_reset con 0x5579ff11f000 session 0x5579ff2f3c00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11f000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 52 ms_handle_reset con 0x5579ff11f000 session 0x5579fd0c3500
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 16809984 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:44.671417+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 52 heartbeat osd_stat(store_statfs(0x4fcc5e000/0x0/0x4ffc00000, data 0x1516b2b/0x156c000, compress 0x0/0x0/0x0, omap 0x6a1e, meta 0x1a295e2), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 52 handle_osd_map epochs [53,53], i have 53, src has [1,53]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 52 handle_osd_map epochs [53,53], i have 53, src has [1,53]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 16769024 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 53 ms_handle_reset con 0x5579ff11e400 session 0x5579fef89dc0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11f800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 53 ms_handle_reset con 0x5579ff11f800 session 0x5579ff30bdc0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11f400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 53 ms_handle_reset con 0x5579ff11f400 session 0x5579ff30a8c0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:45.671540+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 15106048 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 53 heartbeat osd_stat(store_statfs(0x4fcc59000/0x0/0x4ffc00000, data 0x151813e/0x1571000, compress 0x0/0x0/0x0, omap 0x6ec1, meta 0x1a2913f), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050bc00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:46.671664+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 23248896 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 54 ms_handle_reset con 0x557a0050bc00 session 0x5579ff2f3a40
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:47.671805+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 22011904 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914755 data_alloc: 218103808 data_used: 19
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:48.671961+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 22151168 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 55 ms_handle_reset con 0x5579ff11e800 session 0x5579fd223dc0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 55 ms_handle_reset con 0x5579ff11e400 session 0x5579fcc7a8c0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 55 heartbeat osd_stat(store_statfs(0x4f9c56000/0x0/0x4ffc00000, data 0x451973f/0x4573000, compress 0x0/0x0/0x0, omap 0x71b7, meta 0x1a28e49), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:49.672106+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 22151168 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11f000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:50.672249+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 55 handle_osd_map epochs [55,56], i have 56, src has [1,56]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 22093824 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 56 ms_handle_reset con 0x5579ff11f000 session 0x5579fd223dc0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:51.672379+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050b400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 19963904 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 57 ms_handle_reset con 0x557a0050b400 session 0x5579fde81340
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:52.672562+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 19947520 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679287 data_alloc: 218103808 data_used: 8138
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:53.672732+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050b000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.603509903s of 11.047701836s, submitted: 162
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 19890176 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 58 heartbeat osd_stat(store_statfs(0x4fcc51000/0x0/0x4ffc00000, data 0x151d579/0x1579000, compress 0x0/0x0/0x0, omap 0x7e47, meta 0x1a281b9), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:54.672885+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 59 ms_handle_reset con 0x557a0050b000 session 0x5579fcc7aa80
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 19881984 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050b000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:55.673027+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 60 ms_handle_reset con 0x557a0050b000 session 0x5579fce75340
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 18653184 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:56.673219+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 61 ms_handle_reset con 0x5579ff11e400 session 0x5579fef59340
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 18604032 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:57.673377+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 18579456 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 701860 data_alloc: 218103808 data_used: 8138
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fcc3b000/0x0/0x4ffc00000, data 0x1524597/0x158b000, compress 0x0/0x0/0x0, omap 0x9675, meta 0x1a2698b), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050ac00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:58.673545+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 18628608 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:59.673712+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 63 ms_handle_reset con 0x557a0050ac00 session 0x5579fcc7bdc0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fcc40000/0x0/0x4ffc00000, data 0x15245ba/0x158c000, compress 0x0/0x0/0x0, omap 0x9675, meta 0x1a2698b), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 18595840 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050a800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:00.673862+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 64 ms_handle_reset con 0x557a0050a800 session 0x5579fce74fc0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 18595840 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cd000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:01.674004+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fcc35000/0x0/0x4ffc00000, data 0x15271af/0x1593000, compress 0x0/0x0/0x0, omap 0x9b7c, meta 0x1a26484), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 65 ms_handle_reset con 0x5579fd8cd000 session 0x5579fde80e00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 18587648 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cd000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 65 ms_handle_reset con 0x5579fd8cd000 session 0x5579fce75880
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050a800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 65 heartbeat osd_stat(store_statfs(0x4fcc35000/0x0/0x4ffc00000, data 0x15271af/0x1593000, compress 0x0/0x0/0x0, omap 0x9b7c, meta 0x1a26484), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 66 ms_handle_reset con 0x5579ff282c00 session 0x5579fce75180
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:02.674170+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff283000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 18382848 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 723773 data_alloc: 218103808 data_used: 8154
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:03.674361+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.428085327s of 10.007885933s, submitted: 151
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 67 ms_handle_reset con 0x5579ff283000 session 0x5579fd0b7180
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 67 ms_handle_reset con 0x557a0050a800 session 0x5579fcc7a8c0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 67 ms_handle_reset con 0x5579ff11e400 session 0x5579fd0c2700
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 17211392 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 67 ms_handle_reset con 0x5579ff11e400 session 0x5579ff117880
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff283000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:04.674482+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 68 ms_handle_reset con 0x5579ff283000 session 0x5579fc4c7340
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17309696 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fcc29000/0x0/0x4ffc00000, data 0x152cc08/0x15a3000, compress 0x0/0x0/0x0, omap 0xab60, meta 0x1a254a0), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:05.674617+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17309696 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:06.674776+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17309696 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:07.674972+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17309696 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 740385 data_alloc: 218103808 data_used: 8154
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:08.675207+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17268736 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 70 ms_handle_reset con 0x5579ff282c00 session 0x5579fc4c6a80
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:09.675338+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 70 heartbeat osd_stat(store_statfs(0x4fcc1c000/0x0/0x4ffc00000, data 0x1530c9f/0x15ab000, compress 0x0/0x0/0x0, omap 0xb303, meta 0x1a24cfd), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17227776 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:10.675479+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 71 ms_handle_reset con 0x5579fd8cc400 session 0x5579fef88700
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 17178624 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:11.675861+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 17170432 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 72 heartbeat osd_stat(store_statfs(0x4fba7f000/0x0/0x4ffc00000, data 0x1531850/0x15aa000, compress 0x0/0x0/0x0, omap 0xb613, meta 0x2bc49ed), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 72 ms_handle_reset con 0x5579fd8cc800 session 0x5579ff06cfc0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:12.676210+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 17104896 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744608 data_alloc: 218103808 data_used: 12215
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:13.676407+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 17096704 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.926726341s of 10.408112526s, submitted: 123
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 73 ms_handle_reset con 0x5579fd8cc800 session 0x5579ff06d340
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:14.676669+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 17088512 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 74 ms_handle_reset con 0x5579fd8cc400 session 0x5579feb95880
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 74 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x1534fb9/0x15b2000, compress 0x0/0x0/0x0, omap 0xbd3f, meta 0x2bc42c1), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:15.677022+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 17063936 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 75 ms_handle_reset con 0x5579ff11e400 session 0x5579fd223500
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:16.677187+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 17022976 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 76 ms_handle_reset con 0x5579ff282c00 session 0x5579ff116000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:17.677383+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff283000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cd000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fba70000/0x0/0x4ffc00000, data 0x1537d49/0x15b7000, compress 0x0/0x0/0x0, omap 0xc27b, meta 0x2bc3d85), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 16556032 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 764911 data_alloc: 218103808 data_used: 16788
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fba47000/0x0/0x4ffc00000, data 0x155d231/0x15de000, compress 0x0/0x0/0x0, omap 0xc4f1, meta 0x2bc3b0f), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:18.677632+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 16556032 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:19.677860+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 16539648 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050ac00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:20.678036+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 16367616 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 78 heartbeat osd_stat(store_statfs(0x4fba49000/0x0/0x4ffc00000, data 0x155e836/0x15e1000, compress 0x0/0x0/0x0, omap 0xc835, meta 0x2bc37cb), peers [0,2] op hist [0,1])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 78 ms_handle_reset con 0x557a0050ac00 session 0x5579feb95a40
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:21.678240+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 16318464 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 79 ms_handle_reset con 0x5579fd8cc400 session 0x5579fce74380
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:22.678544+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 16302080 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771120 data_alloc: 218103808 data_used: 26993
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:23.678777+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 80 ms_handle_reset con 0x5579fd8cc800 session 0x5579fce74c40
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 80 ms_handle_reset con 0x5579ff11e400 session 0x5579fd0c28c0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.784863472s of 10.014824867s, submitted: 147
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 16318464 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 81 ms_handle_reset con 0x5579ff282c00 session 0x5579fcc7bdc0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:24.678989+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 16293888 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050b000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:25.679183+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 82 ms_handle_reset con 0x557a0050b000 session 0x5579fd2236c0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 16228352 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fba3e000/0x0/0x4ffc00000, data 0x1563f71/0x15ec000, compress 0x0/0x0/0x0, omap 0xd794, meta 0x2bc286c), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 82 handle_osd_map epochs [83,83], i have 83, src has [1,83]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:26.679346+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579fd8cc400 session 0x5579ff30bdc0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 16228352 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:27.679621+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 16228352 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 781447 data_alloc: 218103808 data_used: 27012
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:28.679892+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 16228352 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:29.680178+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 16228352 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread fragmentation_score=0.000119 took=0.000017s
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:30.680397+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579fd8cc800 session 0x5579ff30ac40
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579ff282c00 session 0x5579fef88540
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579ff11e400 session 0x5579ff30b340
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050b400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x557a0050b400 session 0x5579fef888c0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579fd8cc400 session 0x5579fef58fc0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579fd8cc800 session 0x5579fef59a40
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579ff11e400 session 0x5579fef581c0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 16080896 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:31.680561+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 83 heartbeat osd_stat(store_statfs(0x4fba3c000/0x0/0x4ffc00000, data 0x15655b8/0x15f0000, compress 0x0/0x0/0x0, omap 0xda19, meta 0x2bc25e7), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 16080896 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:32.680702+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 84 ms_handle_reset con 0x5579ff282c00 session 0x5579ff2da540
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 15007744 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 787085 data_alloc: 218103808 data_used: 27012
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:33.680884+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050b800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 15007744 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:34.681040+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 84 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1566ac3/0x15f4000, compress 0x0/0x0/0x0, omap 0xdca2, meta 0x2bc235e), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 15007744 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:35.681212+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 15007744 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:36.681372+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 15007744 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050bc00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 84 ms_handle_reset con 0x557a0050bc00 session 0x5579ff2f3a40
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.167292595s of 13.318862915s, submitted: 74
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:37.681525+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 14999552 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 787285 data_alloc: 218103808 data_used: 27078
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 84 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1566ac3/0x15f4000, compress 0x0/0x0/0x0, omap 0xdca2, meta 0x2bc235e), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 85 ms_handle_reset con 0x5579fd8cc400 session 0x5579ff2dafc0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 85 ms_handle_reset con 0x5579fd8cc800 session 0x5579ff2da1c0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 85 ms_handle_reset con 0x5579ff11e400 session 0x5579fef58000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:38.681688+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 85 ms_handle_reset con 0x5579ff282c00 session 0x5579ff30a1c0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050a800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1566ac3/0x15f4000, compress 0x0/0x0/0x0, omap 0xdca2, meta 0x2bc235e), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 14794752 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 86 ms_handle_reset con 0x557a0050a800 session 0x5579fef58380
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:39.681913+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 14786560 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 87 ms_handle_reset con 0x5579fd8cc400 session 0x5579fd0c2a80
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:40.682082+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 14696448 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 87 ms_handle_reset con 0x5579fd8cc800 session 0x5579fd0b76c0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 87 ms_handle_reset con 0x5579ff11e400 session 0x5579fef88e00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 87 ms_handle_reset con 0x5579ff282c00 session 0x5579fde81c00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:41.682280+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 14671872 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 87 heartbeat osd_stat(store_statfs(0x4fba2a000/0x0/0x4ffc00000, data 0x156ad2e/0x1600000, compress 0x0/0x0/0x0, omap 0xe81b, meta 0x2bc17e5), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd903800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:42.682403+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 87 heartbeat osd_stat(store_statfs(0x4fba2a000/0x0/0x4ffc00000, data 0x156ad2e/0x1600000, compress 0x0/0x0/0x0, omap 0xe81b, meta 0x2bc17e5), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 87 ms_handle_reset con 0x5579fd903800 session 0x5579fd0b6c40
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd903800
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 14671872 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 801919 data_alloc: 218103808 data_used: 31803
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:43.682919+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fba2a000/0x0/0x4ffc00000, data 0x156acec/0x15ff000, compress 0x0/0x0/0x0, omap 0xea3c, meta 0x2bc15c4), peers [0,2] op hist [2])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 88 ms_handle_reset con 0x5579fd903800 session 0x5579fde80e00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 14614528 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:44.683592+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 14614528 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 88 ms_handle_reset con 0x557a0050b800 session 0x5579ff06da40
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:45.683728+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 89 ms_handle_reset con 0x5579ff11e400 session 0x5579fc4c6e00
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 14598144 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:46.683954+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 14598144 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:47.684205+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 89 heartbeat osd_stat(store_statfs(0x4fba25000/0x0/0x4ffc00000, data 0x156d8af/0x1601000, compress 0x0/0x0/0x0, omap 0xf49f, meta 0x2bc0b61), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 89 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.245538712s of 10.530361176s, submitted: 174
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 14598144 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807114 data_alloc: 218103808 data_used: 35716
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:48.684392+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 90 ms_handle_reset con 0x5579ff283000 session 0x5579fef89180
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 90 ms_handle_reset con 0x5579fd8cd000 session 0x5579fd0c2c40
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cd000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 90 ms_handle_reset con 0x5579fd8cd000 session 0x5579ff346700
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 14704640 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:49.684725+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 90 heartbeat osd_stat(store_statfs(0x4fba4a000/0x0/0x4ffc00000, data 0x154adb3/0x15e0000, compress 0x0/0x0/0x0, omap 0xf897, meta 0x2bc0769), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 90 heartbeat osd_stat(store_statfs(0x4fba4a000/0x0/0x4ffc00000, data 0x154ad90/0x15df000, compress 0x0/0x0/0x0, omap 0xf89b, meta 0x2bc0765), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 14704640 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:50.685067+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 90 ms_handle_reset con 0x5579ff11e400 session 0x5579ff117880
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff283000
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 13631488 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:51.685236+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 90 handle_osd_map epochs [90,91], i have 91, src has [1,91]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 91 ms_handle_reset con 0x5579ff283000 session 0x5579ff3476c0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 13615104 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fba48000/0x0/0x4ffc00000, data 0x154c3c1/0x15e2000, compress 0x0/0x0/0x0, omap 0xff15, meta 0x2bc00eb), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:52.685395+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 13615104 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 806746 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:53.685571+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:54.685878+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:55.686212+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:56.686363+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:57.686560+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.008848190s of 10.286990166s, submitted: 92
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba45000/0x0/0x4ffc00000, data 0x154d88d/0x15e5000, compress 0x0/0x0/0x0, omap 0x101b5, meta 0x2bbfe4b), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:58.686750+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:59.686932+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:00.687207+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:01.687416+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:02.687666+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:03.687922+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:04.688195+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:05.688399+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:06.688566+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:07.688707+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:08.688827+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:09.688966+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:10.689092+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:11.689233+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:12.689358+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:13.689519+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:14.689764+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:15.689907+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:16.690042+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:17.690207+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:18.690334+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:19.690498+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:20.690682+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:21.690831+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:22.691011+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:23.691191+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:24.691344+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:25.691508+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:26.691629+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:27.691781+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:28.691902+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:29.692047+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:30.692219+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:31.692324+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:32.692468+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:33.692866+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:34.693026+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:35.693163+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:36.693307+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:37.693405+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:38.693532+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:39.693736+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:40.693952+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:41.694083+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:42.694268+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:43.694467+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:44.694622+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:45.694847+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:46.694995+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:47.695188+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:48.695337+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:49.695480+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:50.695626+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:51.695773+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:52.695925+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:53.696089+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:54.696250+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:55.696487+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:56.696643+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:57.696815+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:58.696946+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:59.697076+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:00.697229+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:01.697374+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:02.697470+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:03.697627+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:04.697756+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:05.697902+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:06.698027+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:07.698191+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:08.698306+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:09.698407+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:10.698536+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:11.698665+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:12.698785+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 13475840 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: do_command 'config diff' '{prefix=config diff}'
Jan 29 09:35:45 compute-0 ceph-osd[87035]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 29 09:35:45 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:35:45 compute-0 ceph-osd[87035]: do_command 'config show' '{prefix=config show}'
Jan 29 09:35:45 compute-0 ceph-osd[87035]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 29 09:35:45 compute-0 ceph-osd[87035]: do_command 'counter dump' '{prefix=counter dump}'
Jan 29 09:35:45 compute-0 ceph-osd[87035]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 29 09:35:45 compute-0 ceph-osd[87035]: do_command 'counter schema' '{prefix=counter schema}'
Jan 29 09:35:45 compute-0 ceph-osd[87035]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:13.698930+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:45 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79200256 unmapped: 13082624 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:35:45 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:14.699050+0000)
Jan 29 09:35:45 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 12869632 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:45 compute-0 ceph-osd[87035]: do_command 'log dump' '{prefix=log dump}'
Jan 29 09:35:45 compute-0 nova_compute[236255]: 2026-01-29 09:35:45.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:35:45 compute-0 nova_compute[236255]: 2026-01-29 09:35:45.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:35:45 compute-0 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 09:35:45 compute-0 ceph-mon[75183]: from='client.14770 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:45 compute-0 ceph-mon[75183]: pgmap v798: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:45 compute-0 ceph-mon[75183]: from='client.14774 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:45 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1919191723' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 29 09:35:45 compute-0 ceph-mon[75183]: from='client.14778 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:45 compute-0 nova_compute[236255]: 2026-01-29 09:35:45.650 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:35:45 compute-0 nova_compute[236255]: 2026-01-29 09:35:45.651 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:35:45 compute-0 nova_compute[236255]: 2026-01-29 09:35:45.651 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:35:45 compute-0 nova_compute[236255]: 2026-01-29 09:35:45.651 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:35:45 compute-0 nova_compute[236255]: 2026-01-29 09:35:45.651 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:35:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 29 09:35:45 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2604996642' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 29 09:35:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 29 09:35:46 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/639586521' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 29 09:35:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:35:46 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2142381333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:35:46 compute-0 nova_compute[236255]: 2026-01-29 09:35:46.210 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:35:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 29 09:35:46 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2683301458' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 29 09:35:46 compute-0 nova_compute[236255]: 2026-01-29 09:35:46.333 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:35:46 compute-0 nova_compute[236255]: 2026-01-29 09:35:46.335 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4954MB free_disk=59.98826471157372GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:35:46 compute-0 nova_compute[236255]: 2026-01-29 09:35:46.335 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:35:46 compute-0 nova_compute[236255]: 2026-01-29 09:35:46.336 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:35:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 29 09:35:46 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3972336268' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 29 09:35:46 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2604996642' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 29 09:35:46 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/639586521' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 29 09:35:46 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2142381333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:35:46 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2683301458' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 29 09:35:46 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3972336268' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 29 09:35:46 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 29 09:35:46 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/582605174' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 29 09:35:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v799: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 29 09:35:47 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2921820971' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 29 09:35:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 29 09:35:47 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1203953422' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 29 09:35:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 29 09:35:47 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3721405701' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 29 09:35:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 29 09:35:47 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/142288172' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 29 09:35:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:35:47 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/582605174' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 29 09:35:47 compute-0 ceph-mon[75183]: pgmap v799: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:47 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2921820971' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 29 09:35:47 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1203953422' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 29 09:35:47 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3721405701' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 29 09:35:47 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/142288172' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 29 09:35:47 compute-0 nova_compute[236255]: 2026-01-29 09:35:47.855 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:35:47 compute-0 nova_compute[236255]: 2026-01-29 09:35:47.856 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:35:47 compute-0 nova_compute[236255]: 2026-01-29 09:35:47.879 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:35:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 29 09:35:48 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2182184971' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 29 09:35:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 29 09:35:48 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4215580313' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 29 09:35:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:35:48 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3022718119' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:35:48 compute-0 nova_compute[236255]: 2026-01-29 09:35:48.391 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:35:48 compute-0 nova_compute[236255]: 2026-01-29 09:35:48.395 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:35:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 29 09:35:48 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394705945' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 29 09:35:48 compute-0 nova_compute[236255]: 2026-01-29 09:35:48.463 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:35:48 compute-0 nova_compute[236255]: 2026-01-29 09:35:48.464 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:35:48 compute-0 nova_compute[236255]: 2026-01-29 09:35:48.464 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:35:48 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2182184971' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 29 09:35:48 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4215580313' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 29 09:35:48 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3022718119' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:35:48 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3394705945' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 29 09:35:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 29 09:35:48 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2996753216' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 29 09:35:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v800: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 29 09:35:48 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/949738712' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 29 09:35:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 29 09:35:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3070989499' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 29 09:35:49 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14814 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007002 3 0.000046
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007235 3 0.000072
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006910 3 0.000059
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007023 3 0.000038
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006618 3 0.000074
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006923 3 0.000059
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=41/43 n=0 ec=23/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006825 3 0.000090
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=41/43 n=0 ec=23/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006528 3 0.000047
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=41/43 n=0 ec=23/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=41/43 n=0 ec=23/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006376 3 0.000039
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006386 3 0.000081
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006343 3 0.000055
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006331 3 0.000062
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006225 3 0.000071
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006818 3 0.000481
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012126 3 0.000087
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012202 3 0.000116
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012386 3 0.000147
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012143 3 0.000205
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000031 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012252 3 0.000046
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011987 3 0.000070
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012421 3 0.000246
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012128 3 0.000150
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011907 3 0.000083
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012099 3 0.000141
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/23 les/c/f=43/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:32.666283+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 43 heartbeat osd_stat(store_statfs(0x4fe162000/0x0/0x4ffc00000, data 0x284ce/0x68000, compress 0x0/0x0/0x0, omap 0x4b03, meta 0x1a2b4fd), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 58753024 unmapped: 3022848 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 43 handle_osd_map epochs [43,44], i have 43, src has [1,44]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.044814 4 0.000039
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000034 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000062 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.045921 4 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000030 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.046101 4 0.000014
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000030 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000052 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.045229 4 0.000015
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000026 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.045454 4 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000042 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.046526 4 0.000039
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.046469 4 0.000062
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000089 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000051 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000127 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.046390 4 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000027 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.047250 4 0.000038
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000042 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.047284 4 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000046 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.047502 4 0.000016
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000031 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002058 3 0.000177
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.047574 4 0.000017
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000040 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.047880 4 0.000021
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000024 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000055 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.047459 4 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000029 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=43 pruub=15.388912201s) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering pruub 91.221656799s@ mbc={}] exit Started/Primary/Peering/WaitUpThru 1.049109 3 0.000101
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=43 pruub=15.388912201s) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering pruub 91.221656799s@ mbc={}] exit Started/Primary/Peering 1.049189 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=43 pruub=15.388912201s) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown pruub 91.221656799s@ mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.0( empty local-lis/les=43/44 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.047976 4 0.000012
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000069 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.047932 4 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000043 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.048158 4 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000035 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.048844 4 0.000019
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000030 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 nova_compute[236255]: 2026-01-29 09:35:49.465 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:35:49 compute-0 nova_compute[236255]: 2026-01-29 09:35:49.465 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003536 3 0.000101
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.049051 4 0.000027
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.048921 4 0.000011
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 nova_compute[236255]: 2026-01-29 09:35:49.465 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000068 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000065 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.049397 4 0.000014
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 44 handle_osd_map epochs [44,44], i have 44, src has [1,44]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003705 3 0.000100
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 nova_compute[236255]: 2026-01-29 09:35:49.465 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000049 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.049465 4 0.000017
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000032 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.049309 4 0.000016
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.049647 4 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000049 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.049688 4 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000031 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.049935 4 0.000025
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.049790 4 0.000019
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000038 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.049762 4 0.000015
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000053 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000020 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000051 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.049864 4 0.000014
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.049601 4 0.000051
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000040 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 1.049648 4 0.000014
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000580 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000039 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000055 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003976 3 0.000165
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004564 3 0.000090
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.10( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003905 3 0.000064
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003690 3 0.000090
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003984 3 0.000254
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003556 3 0.000086
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004699 3 0.000410
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.0( empty local-lis/les=43/44 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.16( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003617 3 0.000071
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003188 3 0.000104
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003439 3 0.000084
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003031 3 0.000068
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.0( empty local-lis/les=43/44 n=0 ec=27/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002928 3 0.000067
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.0( empty local-lis/les=43/44 n=0 ec=27/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.0( empty local-lis/les=43/44 n=0 ec=27/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.0( empty local-lis/les=43/44 n=0 ec=27/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002819 3 0.000119
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002645 3 0.000096
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002461 3 0.000095
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.001060 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.028880 3 0.000189
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.028882 3 0.000113
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.18( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.028687 3 0.000111
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.029329 3 0.000484
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.028676 3 0.000082
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.028598 3 0.000119
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.028516 3 0.000077
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.028417 3 0.000100
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000028 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.028378 3 0.000141
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.028356 3 0.000115
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.028215 3 0.000099
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.028153 3 0.000668
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.028180 3 0.000133
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.029779 3 0.001146
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/27 les/c/f=44/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:33.666446+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 58941440 unmapped: 2834432 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:34.666583+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 59072512 unmapped: 2703360 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:35.666771+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 59072512 unmapped: 2703360 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 44 handle_osd_map epochs [44,45], i have 44, src has [1,45]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000164 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000043 1 0.000065
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000258 1 0.000055
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000071 1 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000034
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000016
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000035
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000017
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000025
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000037 1 0.000024
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000037 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000012
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000024
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000061 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000028
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000046 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000038
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000014
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000022
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000037 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000012
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000027
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000023
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000024
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000171 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000041
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000193 1 0.000071
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000100 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000021
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000070 1 0.000037
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000160 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000020
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000069 1 0.000041
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000045
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=0 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000012
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.005860 1 0.000025
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000118 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000077 1 0.000083
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000141 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.082106 4 0.000054
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.089717 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.095145 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.095177 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.917540550s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.838790894s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.917479515s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838790894s@ mbc={}] exit Reset 0.000136 1 0.000200
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.917479515s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838790894s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.917479515s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838790894s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.917479515s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838790894s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.917479515s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838790894s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.917479515s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838790894s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 45 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.035893 1 0.000038
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.039463 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.039502 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.039539 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963941574s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.885498047s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963922501s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885498047s@ mbc={}] exit Reset 0.000042 1 0.000082
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963922501s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885498047s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963922501s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885498047s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012062 2 0.000067
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963922501s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885498047s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963922501s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885498047s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963922501s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885498047s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.035743 1 0.000024
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.039476 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.039539 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.039560 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963768005s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.885505676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963748932s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885505676s@ mbc={}] exit Reset 0.000037 1 0.000057
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963748932s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885505676s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963748932s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885505676s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963748932s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885505676s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963748932s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885505676s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963748932s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.885505676s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.034820 1 0.000026
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.039417 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.039456 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.039490 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.082786 4 0.000083
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090056 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.096255 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.096285 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916956902s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.838867188s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964960098s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.886856079s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000398 1 0.000187
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916940689s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838867188s@ mbc={}] exit Reset 0.000051 1 0.000066
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916940689s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838867188s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916940689s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838867188s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916940689s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838867188s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916940689s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838867188s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916940689s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838867188s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964897156s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886856079s@ mbc={}] exit Reset 0.000117 1 0.000156
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964897156s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886856079s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964897156s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886856079s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964897156s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886856079s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964897156s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886856079s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964897156s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886856079s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.082825 4 0.000094
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090207 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.096598 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.035038 1 0.000033
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.096621 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.039076 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.039219 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.039253 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916765213s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.838905334s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964747429s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.886917114s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916724205s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838905334s@ mbc={}] exit Reset 0.000081 1 0.000094
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916724205s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838905334s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916724205s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838905334s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916724205s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838905334s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964704514s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886917114s@ mbc={}] exit Reset 0.000072 1 0.000082
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916724205s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838905334s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964704514s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886917114s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916724205s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838905334s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964704514s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886917114s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964704514s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886917114s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964704514s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886917114s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964704514s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886917114s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.083062 4 0.000038
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090334 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.096974 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.096998 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.035382 1 0.000020
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.039317 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.083124 4 0.000041
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916565895s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.838989258s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.039357 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090260 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.097090 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916548729s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] exit Reset 0.000035 1 0.000062
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916548729s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916548729s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.039387 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916548729s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916548729s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.097119 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916548729s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916506767s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.838989258s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916488647s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] exit Reset 0.000041 1 0.000083
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916488647s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916488647s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916488647s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916488647s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916488647s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838989258s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.083244 4 0.000052
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090280 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.097271 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.097294 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916416168s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839012146s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916399002s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839012146s@ mbc={}] exit Reset 0.000035 1 0.000063
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916399002s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839012146s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.035504 1 0.000020
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916399002s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839012146s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916399002s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839012146s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.039088 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916399002s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839012146s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916399002s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839012146s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.039146 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.039209 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964230537s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.886940002s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964209557s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886940002s@ mbc={}] exit Reset 0.000046 1 0.000114
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964209557s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886940002s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012982 2 0.000030
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964209557s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886940002s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964209557s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886940002s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964209557s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886940002s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964209557s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886940002s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.083431 4 0.000065
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090501 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.097642 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.097678 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.035426 1 0.000025
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.039070 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.039114 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.039135 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916178703s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839042664s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964348793s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.887237549s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916154861s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839042664s@ mbc={}] exit Reset 0.000059 1 0.000083
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916154861s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839042664s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916154861s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839042664s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916154861s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839042664s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916154861s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839042664s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916154861s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839042664s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964330673s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887237549s@ mbc={}] exit Reset 0.000054 1 0.000069
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964330673s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887237549s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964330673s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887237549s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964330673s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887237549s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964330673s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887237549s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964330673s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887237549s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.083614 4 0.000035
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090580 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.098959 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013041 2 0.000025
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963886261s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.886901855s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.098994 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963843346s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886901855s@ mbc={}] exit Reset 0.000607 1 0.000648
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.916011810s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839080811s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963843346s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886901855s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963843346s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886901855s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963843346s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886901855s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.035616 1 0.000043
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963843346s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886901855s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.039095 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963843346s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.886901855s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.039149 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915985107s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839080811s@ mbc={}] exit Reset 0.000055 1 0.000088
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915985107s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839080811s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915985107s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839080811s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915985107s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839080811s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.039171 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915985107s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839080811s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915985107s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839080811s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964128494s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.887260437s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964110374s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887260437s@ mbc={}] exit Reset 0.000044 1 0.000073
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964110374s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887260437s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964110374s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887260437s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964110374s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887260437s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964110374s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887260437s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964110374s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887260437s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.035734 1 0.000020
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.038969 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.039038 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.039062 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964040756s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.887283325s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964025497s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887283325s@ mbc={}] exit Reset 0.000032 1 0.000074
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964025497s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887283325s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964025497s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887283325s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964025497s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887283325s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964025497s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887283325s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.964025497s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887283325s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.035817 1 0.000027
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.038878 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.038919 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.038941 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.083957 4 0.000055
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090629 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.097124 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963943481s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.887313843s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.084131 4 0.000036
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.091407 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.098004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963914871s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887313843s@ mbc={}] exit Reset 0.000052 1 0.000079
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.098032 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963914871s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887313843s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963914871s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887313843s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963914871s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887313843s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.083817 4 0.000055
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963914871s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887313843s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090702 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963914871s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887313843s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915515900s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.838935852s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.097329 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915501595s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838935852s@ mbc={}] exit Reset 0.000032 1 0.000052
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915501595s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838935852s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.097361 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915501595s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838935852s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915501595s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838935852s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915501595s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838935852s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915501595s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.838935852s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915622711s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839096069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915602684s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839096069s@ mbc={}] exit Reset 0.000046 1 0.000086
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915602684s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839096069s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915602684s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839096069s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915602684s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839096069s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.097297 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915602684s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839096069s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915602684s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839096069s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915547371s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839103699s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.036020 1 0.000021
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.038696 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013443 2 0.000045
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.038754 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.038782 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963695526s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.887374878s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963669777s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887374878s@ mbc={}] exit Reset 0.000067 1 0.000092
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963669777s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887374878s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.008976 1 0.000039
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963669777s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887374878s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.038342 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963669777s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887374878s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.038384 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.084252 4 0.000039
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963669777s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887374878s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090689 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.963669777s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.887374878s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.038405 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.098805 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.098829 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990222931s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.913986206s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915362358s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839141846s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990189552s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.913986206s@ mbc={}] exit Reset 0.000052 1 0.000080
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990189552s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.913986206s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915343285s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839141846s@ mbc={}] exit Reset 0.000046 1 0.000099
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990189552s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.913986206s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915343285s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839141846s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.084327 4 0.000040
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990189552s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.913986206s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915343285s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839141846s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990189552s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.913986206s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090712 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.099013 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990189552s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.913986206s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915343285s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839141846s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.099055 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915343285s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839141846s@ mbc={}] exit Start 0.000026 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915343285s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839141846s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915319443s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839187622s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.009247 1 0.000060
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915301323s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839187622s@ mbc={}] exit Reset 0.000042 1 0.000078
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.038177 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915301323s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839187622s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.038259 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915301323s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839187622s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915301323s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839187622s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.038284 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915301323s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839187622s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915301323s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839187622s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990167618s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.914100647s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990150452s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914100647s@ mbc={}] exit Reset 0.000047 1 0.000059
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990150452s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914100647s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990150452s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914100647s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990150452s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914100647s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990150452s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914100647s@ mbc={}] exit Start 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990150452s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914100647s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.084582 4 0.000035
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915018082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839103699s@ mbc={}] exit Reset 0.000570 1 0.000260
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090947 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915018082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839103699s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.099575 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915018082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839103699s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915018082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839103699s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915018082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839103699s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915018082s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839103699s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.099609 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915086746s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839225769s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.084603 4 0.000121
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915067673s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] exit Reset 0.000051 1 0.000071
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915067673s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915067673s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915067673s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090959 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915067673s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.099370 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.915067673s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.099427 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013835 2 0.000026
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.914976120s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.839225769s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.078427 4 0.000156
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090914 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.099742 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.914950371s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] exit Reset 0.000055 1 0.000117
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.914950371s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.099765 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.914950371s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.914950371s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.914950371s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.009535 1 0.000028
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.914950371s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.839225769s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.038244 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.038288 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921164513s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.845474243s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.038308 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921147346s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845474243s@ mbc={}] exit Reset 0.000037 1 0.000060
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921147346s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845474243s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921147346s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845474243s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921147346s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845474243s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921147346s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845474243s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921147346s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845474243s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990023613s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.914367676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990005493s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914367676s@ mbc={}] exit Reset 0.000036 1 0.000063
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.078305 4 0.000070
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990005493s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914367676s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990005493s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914367676s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090827 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990005493s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914367676s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.099988 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990005493s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914367676s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990005493s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914367676s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.100024 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921440125s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.845909119s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.009505 1 0.000028
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.037688 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.037738 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.038335 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921417236s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845909119s@ mbc={}] exit Reset 0.000047 1 0.000086
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921417236s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845909119s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921417236s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845909119s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921417236s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845909119s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921417236s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845909119s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921417236s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845909119s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990056038s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.914581299s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990031242s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914581299s@ mbc={}] exit Reset 0.000058 1 0.000076
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990031242s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914581299s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990031242s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914581299s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990031242s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914581299s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990031242s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914581299s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.990031242s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914581299s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.078827 4 0.000058
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.091031 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.099494 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.099529 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.920965195s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.845588684s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.078538 4 0.000062
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090841 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.099534 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.920948982s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845588684s@ mbc={}] exit Reset 0.000035 1 0.000092
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.920948982s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845588684s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.920948982s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845588684s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.099564 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.920948982s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845588684s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.920948982s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845588684s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.920948982s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845588684s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921165466s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.845848083s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.009777 1 0.000045
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.038168 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921145439s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845848083s@ mbc={}] exit Reset 0.000049 1 0.000077
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.038237 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921145439s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845848083s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921145439s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845848083s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.038265 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921145439s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845848083s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921145439s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845848083s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921145439s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845848083s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.078547 4 0.000069
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.090687 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.100432 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989798546s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.914543152s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.100470 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989780426s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914543152s@ mbc={}] exit Reset 0.000039 1 0.000068
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989780426s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914543152s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989780426s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914543152s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989780426s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914543152s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989780426s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914543152s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921191216s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 active pruub 91.845970154s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989780426s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914543152s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.007377 1 0.000039
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.037195 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.038273 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.038302 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.009967 1 0.000107
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.038211 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.038267 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.038289 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.992457390s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.917366028s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989642143s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.914573669s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921034813s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845970154s@ mbc={}] exit Reset 0.000176 1 0.000197
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921034813s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845970154s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989624977s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914573669s@ mbc={}] exit Reset 0.000032 1 0.000064
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921034813s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845970154s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989624977s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914573669s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921034813s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845970154s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989624977s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914573669s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989624977s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914573669s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921034813s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845970154s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989624977s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914573669s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.992428780s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.917366028s@ mbc={}] exit Reset 0.000075 1 0.000090
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45 pruub=11.921034813s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY pruub 91.845970154s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989624977s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914573669s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.992428780s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.917366028s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.992428780s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.917366028s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.992428780s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.917366028s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.992428780s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.917366028s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.992428780s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.917366028s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 3.010048 1 0.000042
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 3.038264 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 3.038336 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] exit Started 3.038370 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989519119s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 active pruub 92.914604187s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989503860s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914604187s@ mbc={}] exit Reset 0.000032 1 0.000052
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989503860s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914604187s@ mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989503860s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914604187s@ mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989503860s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914604187s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989503860s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914604187s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=12.989503860s) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY pruub 92.914604187s@ mbc={}] enter Started/Stray
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000110 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000014
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000105 1 0.000039
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014942 3 0.000034
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000064 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000097 1 0.000052
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015112 3 0.000030
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014781 3 0.000033
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000067 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000013
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000080 1 0.000035
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014906 3 0.000028
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014788 3 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015422 3 0.000020
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000020
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000086 1 0.000042
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015020 3 0.000022
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000018
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000040
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000019
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000084 1 0.000040
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000069 1 0.000041
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000038
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000056 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000022
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000421 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000089 1 0.000473
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000061 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000077 1 0.000035
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000014
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000062 1 0.000039
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000033
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000017
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000069 1 0.000050
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000033
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000074 1 0.000033
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000058 1 0.000039
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000018
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000046
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000046 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000071 1 0.000038
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=0 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000022
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000040
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f(unlocked)] enter Initial
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=0 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019348 3 0.000027
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019250 3 0.000025
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018668 3 0.000062
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018380 3 0.000042
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017788 3 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017641 3 0.000026
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011633 3 0.000275
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016206 3 0.000920
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021499 2 0.000037
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.022058 2 0.000039
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020885 2 0.000059
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020371 2 0.000038
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020664 2 0.000052
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021471 2 0.000032
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020256 2 0.000040
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019409 2 0.000030
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020087 2 0.000027
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019218 2 0.000030
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018999 2 0.000028
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018851 2 0.000036
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018668 2 0.000033
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018481 2 0.000026
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018114 2 0.000037
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018388 2 0.000026
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017222 2 0.000042
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017734 2 0.000033
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017560 2 0.000033
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020144 2 0.000057
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019260 2 0.000027
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:36.666952+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61562880 unmapped: 212992 heap: 61775872 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 395364 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 45 handle_osd_map epochs [45,46], i have 45, src has [1,46]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 45 handle_osd_map epochs [45,46], i have 46, src has [1,46]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.963710 2 0.000047
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983079 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976013 2 0.000228
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993632 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966687 2 0.000036
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988284 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982952 2 0.000033
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000530 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.983048 2 0.000030
Jan 29 09:35:49 compute-0 nova_compute[236255]: 2026-01-29 09:35:49.496 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:35:49 compute-0 nova_compute[236255]: 2026-01-29 09:35:49.497 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:35:49 compute-0 nova_compute[236255]: 2026-01-29 09:35:49.497 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:35:49 compute-0 nova_compute[236255]: 2026-01-29 09:35:49.497 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000773 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.983170 2 0.000028
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001065 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967122 2 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988155 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967386 2 0.000144
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989122 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966673 2 0.000022
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983992 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967342 2 0.000034
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989563 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.983527 2 0.000026
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002013 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966886 2 0.000027
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985354 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.983666 2 0.000026
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002565 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.964732 2 0.000040
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984978 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.983803 2 0.000027
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003122 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967322 2 0.000027
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987496 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984000 2 0.000049
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003419 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967428 2 0.000533
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986202 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989191 2 0.000035
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004067 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967726 2 0.000028
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986670 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988937 2 0.000043
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004049 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989423 2 0.000041
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004412 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989819 2 0.000029
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004691 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989505 2 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005009 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990062 2 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005293 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.968186 2 0.000038
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987719 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990613 2 0.000035
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005662 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967987 2 0.000020
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985654 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.968431 2 0.000106
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989165 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992750 3 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006281 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.968688 2 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989061 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.968583 2 0.000043
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987682 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992279 3 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006204 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.968555 2 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987123 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.968980 2 0.000047
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989746 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.968478 2 0.000046
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986327 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.968672 2 0.000025
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986960 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993783 3 0.000036
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006920 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.968653 2 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988373 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995320 3 0.000076
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007731 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994249 3 0.000044
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007350 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006761 3 0.000178
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006660 3 0.000126
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006590 3 0.000065
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006648 3 0.000065
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006634 3 0.000051
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006981 3 0.000642
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018139 4 0.000157
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018029 4 0.000072
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018053 4 0.000185
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018351 4 0.000061
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018300 4 0.000057
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018286 4 0.000057
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018265 4 0.000048
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018255 4 0.000044
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018229 4 0.000043
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018170 4 0.000053
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017964 4 0.000090
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017758 4 0.000065
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017747 4 0.000060
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017885 4 0.000063
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017835 4 0.000062
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017754 4 0.000047
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017622 4 0.000071
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017537 4 0.000057
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017590 4 0.000079
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017357 4 0.000064
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017303 4 0.000234
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017363 4 0.000161
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017251 4 0.000067
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000026 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017210 4 0.000092
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017179 4 0.000062
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017251 4 0.000080
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017169 4 0.000259
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017191 4 0.000085
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017148 4 0.000070
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017056 4 0.000155
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017056 4 0.000078
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016996 4 0.000451
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=43/29 lis/c=45/43 les/c/f=46/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016964 4 0.000088
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=41/25 lis/c=45/41 les/c/f=46/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016979 4 0.000063
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=45/46 n=0 ec=39/19 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000174 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019338 4 0.000123
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000027 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=39/21 lis/c=45/39 les/c/f=46/40/0 sis=45) [0] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019892 7 0.000060
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018937 7 0.000087
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021792 7 0.000078
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020245 7 0.000059
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019653 7 0.000067
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018782 7 0.000096
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018832 7 0.000057
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020777 7 0.000056
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021455 7 0.000057
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018316 7 0.000074
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021068 7 0.000080
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021722 7 0.000078
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020241 7 0.000055
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000255 1 0.000095
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018392 7 0.000084
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021040 7 0.000064
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000300 1 0.000026
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000376 1 0.000045
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000408 1 0.000022
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000438 1 0.000042
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000463 1 0.000022
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000471 1 0.000092
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000547 1 0.000044
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000580 1 0.000056
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000578 1 0.000026
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000606 1 0.000026
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000645 1 0.000019
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000715 1 0.000096
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000510 1 0.000334
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000764 1 0.000247
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021869 7 0.000069
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000068 1 0.000045
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025278 7 0.000066
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025115 7 0.000045
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023477 7 0.000043
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025511 7 0.000065
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024465 7 0.000058
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024453 7 0.000084
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000088 1 0.000049
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023810 7 0.000044
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000135 1 0.000067
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024683 7 0.000062
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023778 7 0.000045
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024170 7 0.000061
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025206 7 0.000037
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025871 7 0.000044
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024547 7 0.000061
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024420 7 0.000061
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024634 7 0.000059
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000273 1 0.000018
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000441 1 0.000072
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000488 1 0.000027
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000465 1 0.000026
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000456 1 0.000032
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000548 1 0.000058
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000544 1 0.000017
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000593 1 0.000028
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000613 1 0.000021
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000642 1 0.000020
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000697 1 0.000016
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000972 1 0.000259
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000778 1 0.000020
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036101 7 0.000081
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034726 7 0.000113
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034680 7 0.000047
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033365 7 0.000051
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036275 7 0.000092
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033354 7 0.000073
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035829 7 0.000056
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033258 7 0.000056
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000238 1 0.000196
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000231 1 0.000026
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000285 1 0.000021
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000753 1 0.000020
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001664 1 0.000290
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001686 1 0.000777
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001755 1 0.000641
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001884 1 0.000757
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.017372 1 0.000066
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.017680 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.037626 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1a( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.023759 1 0.000036
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1a( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.024187 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1a( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.043160 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.18( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.031298 1 0.000054
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.18( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.031654 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.18( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.053481 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.e( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038809 1 0.000024
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.e( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039256 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.e( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.059534 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.046003 1 0.000025
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.046500 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.066189 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.053110 1 0.000050
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053616 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.072425 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.a( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.062034 1 0.000041
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.a( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.062628 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.a( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.081551 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.14( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067691 1 0.000023
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.14( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.068307 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.14( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.089816 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.11( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.075233 1 0.000046
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.11( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.075830 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.11( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.096650 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:37.667100+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.13( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.082587 1 0.000020
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.13( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.083205 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.13( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.104313 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.090112 1 0.000040
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.090762 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.112511 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.097210 1 0.000064
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.097898 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.118169 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1c( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.104668 1 0.000045
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1c( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.105476 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1c( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.123839 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.112365 1 0.000020
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112939 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.134322 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.120066 1 0.000022
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.120887 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.139312 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1b( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.124768 1 0.000039
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1b( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.124892 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1b( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.146810 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.129868 1 0.000039
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.129988 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.155310 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.d( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.137554 1 0.000064
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.d( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.137739 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.d( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.162904 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.d( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.144607 1 0.000096
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.d( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.144955 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.d( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.170505 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1e( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.151753 1 0.000033
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1e( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.152234 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1e( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.175759 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.4( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.159058 1 0.000034
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.4( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.159591 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.4( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.184096 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.166631 1 0.000049
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.167143 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.190986 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.7( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.174063 1 0.000031
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.7( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.175230 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.7( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.199053 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.181392 1 0.000041
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.181999 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.206717 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.188636 1 0.000043
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.189220 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.214449 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.5( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.195886 1 0.000028
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.5( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.196519 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.5( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.220727 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.f( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.203401 1 0.000038
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.f( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.204057 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.f( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.229959 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.b( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.210663 1 0.000047
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.b( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.211347 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.b( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.235917 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.2( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.218056 1 0.000042
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.2( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.218794 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.2( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.243240 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.225491 1 0.000032
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.226520 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.251025 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.9( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.232647 1 0.000024
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.9( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.233464 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.9( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.258130 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.17( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.231182 1 0.000069
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.17( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.231468 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.17( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.267657 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.238324 1 0.000054
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.238605 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.271996 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1d( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.245167 1 0.000042
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1d( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.245979 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1d( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.279261 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.10( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.253162 1 0.000574
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.10( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.253513 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.10( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.289376 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.259078 1 0.000058
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.260786 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.295557 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.8( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.266567 1 0.000062
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.8( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.268294 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.8( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.301838 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.14( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.273814 1 0.000025
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.14( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.275625 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.14( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.311961 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.12( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.281210 1 0.000035
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.12( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.283145 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.12( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.317887 0 0.000000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 1056768 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe154000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:38.667286+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 1040384 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:39.667429+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 1032192 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe154000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:40.667669+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.277514458s of 12.600845337s, submitted: 606
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 1024000 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:41.667805+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 5 sent 3 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:10.726313+0000 osd.0 (osd.0) 4 : cluster [DBG] 4.17 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:10.736734+0000 osd.0 (osd.0) 5 : cluster [DBG] 4.17 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 5)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:10.726313+0000 osd.0 (osd.0) 4 : cluster [DBG] 4.17 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:10.736734+0000 osd.0 (osd.0) 5 : cluster [DBG] 4.17 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 1024000 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 344839 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:42.668014+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 1024000 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:43.668126+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 1015808 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:44.668460+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 7 sent 5 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:13.698477+0000 osd.0 (osd.0) 6 : cluster [DBG] 6.1a scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:13.709036+0000 osd.0 (osd.0) 7 : cluster [DBG] 6.1a scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 983040 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 7)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:13.698477+0000 osd.0 (osd.0) 6 : cluster [DBG] 6.1a scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:13.709036+0000 osd.0 (osd.0) 7 : cluster [DBG] 6.1a scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:45.668752+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 983040 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:46.668888+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 9 sent 7 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:16.642252+0000 osd.0 (osd.0) 8 : cluster [DBG] 4.16 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:16.652077+0000 osd.0 (osd.0) 9 : cluster [DBG] 4.16 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 983040 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 349665 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 9)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:16.642252+0000 osd.0 (osd.0) 8 : cluster [DBG] 4.16 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:16.652077+0000 osd.0 (osd.0) 9 : cluster [DBG] 4.16 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:47.669227+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 983040 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:48.669399+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 974848 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:49.669570+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:19.590183+0000 osd.0 (osd.0) 10 : cluster [DBG] 4.15 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:19.600633+0000 osd.0 (osd.0) 11 : cluster [DBG] 4.15 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 901120 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 11)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:19.590183+0000 osd.0 (osd.0) 10 : cluster [DBG] 4.15 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:19.600633+0000 osd.0 (osd.0) 11 : cluster [DBG] 4.15 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:50.669852+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 901120 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.550309181s of 10.874446869s, submitted: 8
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:51.670108+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:21.600690+0000 osd.0 (osd.0) 12 : cluster [DBG] 6.16 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:21.611242+0000 osd.0 (osd.0) 13 : cluster [DBG] 6.16 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 892928 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 354491 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:52.670646+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 4 last_log 15 sent 13 num 4 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:22.639979+0000 osd.0 (osd.0) 14 : cluster [DBG] 6.10 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:22.664707+0000 osd.0 (osd.0) 15 : cluster [DBG] 6.10 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 13)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:21.600690+0000 osd.0 (osd.0) 12 : cluster [DBG] 6.16 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:21.611242+0000 osd.0 (osd.0) 13 : cluster [DBG] 6.16 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 15)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:22.639979+0000 osd.0 (osd.0) 14 : cluster [DBG] 6.10 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:22.664707+0000 osd.0 (osd.0) 15 : cluster [DBG] 6.10 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 868352 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:53.671673+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 876544 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:54.671855+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:24.614775+0000 osd.0 (osd.0) 16 : cluster [DBG] 6.12 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:24.625397+0000 osd.0 (osd.0) 17 : cluster [DBG] 6.12 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 17)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:24.614775+0000 osd.0 (osd.0) 16 : cluster [DBG] 6.12 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:24.625397+0000 osd.0 (osd.0) 17 : cluster [DBG] 6.12 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 876544 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:55.672070+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 876544 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:56.672258+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 876544 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 359317 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:57.672821+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 868352 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:58.673089+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 868352 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:59.673294+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 925696 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:00.673659+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:30.643210+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.c scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:30.653637+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.c scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 909312 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 19)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:30.643210+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.c scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:30.653637+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.c scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:01.673860+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 901120 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 361728 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:02.674074+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.969016075s of 11.074629784s, submitted: 8
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 876544 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:03.674263+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:32.675394+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.0 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:32.685980+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.0 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 876544 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 21)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:32.675394+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.0 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:32.685980+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.0 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:04.674563+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 860160 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:05.674791+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61972480 unmapped: 851968 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:06.674973+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61972480 unmapped: 851968 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 364139 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:07.675170+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:37.651377+0000 osd.0 (osd.0) 22 : cluster [DBG] 6.0 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:37.661822+0000 osd.0 (osd.0) 23 : cluster [DBG] 6.0 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 23)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:37.651377+0000 osd.0 (osd.0) 22 : cluster [DBG] 6.0 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:37.661822+0000 osd.0 (osd.0) 23 : cluster [DBG] 6.0 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 827392 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:08.675469+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 827392 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:09.675696+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62029824 unmapped: 794624 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:10.675966+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 786432 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:11.676115+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 786432 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 366550 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:12.676401+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.968639374s of 10.029688835s, submitted: 4
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 778240 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:13.676579+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:42.705049+0000 osd.0 (osd.0) 24 : cluster [DBG] 6.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:42.715683+0000 osd.0 (osd.0) 25 : cluster [DBG] 6.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 25)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:42.705049+0000 osd.0 (osd.0) 24 : cluster [DBG] 6.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:42.715683+0000 osd.0 (osd.0) 25 : cluster [DBG] 6.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 778240 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:14.676819+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:43.684902+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:43.695405+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 27)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:43.684902+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:43.695405+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 778240 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:15.677176+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62054400 unmapped: 770048 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:16.677304+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62054400 unmapped: 770048 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 371372 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:17.677484+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:47.608243+0000 osd.0 (osd.0) 28 : cluster [DBG] 6.1b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:47.618892+0000 osd.0 (osd.0) 29 : cluster [DBG] 6.1b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 753664 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 29)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:47.608243+0000 osd.0 (osd.0) 28 : cluster [DBG] 6.1b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:47.618892+0000 osd.0 (osd.0) 29 : cluster [DBG] 6.1b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:18.677725+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:48.652664+0000 osd.0 (osd.0) 30 : cluster [DBG] 4.19 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:48.663384+0000 osd.0 (osd.0) 31 : cluster [DBG] 4.19 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 720896 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:19.677940+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 31)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:48.652664+0000 osd.0 (osd.0) 30 : cluster [DBG] 4.19 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:48.663384+0000 osd.0 (osd.0) 31 : cluster [DBG] 4.19 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 720896 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:20.678117+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 712704 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:21.678410+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:51.621335+0000 osd.0 (osd.0) 32 : cluster [DBG] 6.18 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:51.631704+0000 osd.0 (osd.0) 33 : cluster [DBG] 6.18 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 33)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:51.621335+0000 osd.0 (osd.0) 32 : cluster [DBG] 6.18 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:51.631704+0000 osd.0 (osd.0) 33 : cluster [DBG] 6.18 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 378611 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62119936 unmapped: 704512 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:22.678662+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 696320 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.595091820s of 10.912874222s, submitted: 10
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:23.678862+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:53.618064+0000 osd.0 (osd.0) 34 : cluster [DBG] 6.7 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:53.632079+0000 osd.0 (osd.0) 35 : cluster [DBG] 6.7 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 35)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:53.618064+0000 osd.0 (osd.0) 34 : cluster [DBG] 6.7 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:53.632079+0000 osd.0 (osd.0) 35 : cluster [DBG] 6.7 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 663552 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:24.679242+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 663552 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:25.679415+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 647168 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:26.679647+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 381022 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 638976 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:27.679907+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 630784 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:28.680038+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 630784 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:29.680215+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 630784 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:30.680413+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 622592 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:31.681012+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 381022 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 622592 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:32.681196+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 606208 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:33.681569+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:02.705817+0000 osd.0 (osd.0) 36 : cluster [DBG] 6.19 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:02.716416+0000 osd.0 (osd.0) 37 : cluster [DBG] 6.19 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 37)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:02.705817+0000 osd.0 (osd.0) 36 : cluster [DBG] 6.19 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:02.716416+0000 osd.0 (osd.0) 37 : cluster [DBG] 6.19 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 606208 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:34.682311+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 606208 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:35.682483+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.091015816s of 12.123208046s, submitted: 4
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 581632 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:36.682643+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:05.741338+0000 osd.0 (osd.0) 38 : cluster [DBG] 4.b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:05.751970+0000 osd.0 (osd.0) 39 : cluster [DBG] 4.b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 385846 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 581632 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 39)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:05.741338+0000 osd.0 (osd.0) 38 : cluster [DBG] 4.b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:05.751970+0000 osd.0 (osd.0) 39 : cluster [DBG] 4.b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:37.683112+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62251008 unmapped: 573440 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:38.683264+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:07.782524+0000 osd.0 (osd.0) 40 : cluster [DBG] 6.9 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:07.793034+0000 osd.0 (osd.0) 41 : cluster [DBG] 6.9 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 540672 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:39.683426+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 41)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:07.782524+0000 osd.0 (osd.0) 40 : cluster [DBG] 6.9 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:07.793034+0000 osd.0 (osd.0) 41 : cluster [DBG] 6.9 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 532480 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:40.683572+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:09.747018+0000 osd.0 (osd.0) 42 : cluster [DBG] 6.5 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:09.757549+0000 osd.0 (osd.0) 43 : cluster [DBG] 6.5 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 43)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:09.747018+0000 osd.0 (osd.0) 42 : cluster [DBG] 6.5 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:09.757549+0000 osd.0 (osd.0) 43 : cluster [DBG] 6.5 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 516096 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:41.683795+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 393079 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 507904 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:42.683955+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:11.797076+0000 osd.0 (osd.0) 44 : cluster [DBG] 6.a scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:11.807595+0000 osd.0 (osd.0) 45 : cluster [DBG] 6.a scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 45)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:11.797076+0000 osd.0 (osd.0) 44 : cluster [DBG] 6.a scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:11.807595+0000 osd.0 (osd.0) 45 : cluster [DBG] 6.a scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 499712 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:43.684154+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 499712 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:44.684285+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 1540096 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:45.684409+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:14.835070+0000 osd.0 (osd.0) 46 : cluster [DBG] 4.1d scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:14.845624+0000 osd.0 (osd.0) 47 : cluster [DBG] 4.1d scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 47)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:14.835070+0000 osd.0 (osd.0) 46 : cluster [DBG] 4.1d scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:14.845624+0000 osd.0 (osd.0) 47 : cluster [DBG] 4.1d scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 1531904 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:46.684592+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.695914268s of 11.070383072s, submitted: 10
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 397905 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 1515520 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:47.684822+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:16.811785+0000 osd.0 (osd.0) 48 : cluster [DBG] 4.1e scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:16.822264+0000 osd.0 (osd.0) 49 : cluster [DBG] 4.1e scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 49)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:16.811785+0000 osd.0 (osd.0) 48 : cluster [DBG] 4.1e scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:16.822264+0000 osd.0 (osd.0) 49 : cluster [DBG] 4.1e scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 1507328 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:48.685017+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 1507328 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:49.685168+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:18.821809+0000 osd.0 (osd.0) 50 : cluster [DBG] 4.1f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:18.832367+0000 osd.0 (osd.0) 51 : cluster [DBG] 4.1f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 51)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:18.821809+0000 osd.0 (osd.0) 50 : cluster [DBG] 4.1f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:18.832367+0000 osd.0 (osd.0) 51 : cluster [DBG] 4.1f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 1507328 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:50.685459+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 1499136 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:51.685684+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 400318 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 1499136 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:52.685831+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 1499136 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:53.685981+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:22.753220+0000 osd.0 (osd.0) 52 : cluster [DBG] 4.6 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:22.762738+0000 osd.0 (osd.0) 53 : cluster [DBG] 4.6 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 53)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:22.753220+0000 osd.0 (osd.0) 52 : cluster [DBG] 4.6 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:22.762738+0000 osd.0 (osd.0) 53 : cluster [DBG] 4.6 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 1490944 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:54.686187+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 1490944 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:55.686394+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 1482752 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:56.686666+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 402729 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 1482752 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.841011047s of 10.853298187s, submitted: 6
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:57.686800+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:27.665099+0000 osd.0 (osd.0) 54 : cluster [DBG] 3.12 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:27.675303+0000 osd.0 (osd.0) 55 : cluster [DBG] 3.12 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 55)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:27.665099+0000 osd.0 (osd.0) 54 : cluster [DBG] 3.12 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:27.675303+0000 osd.0 (osd.0) 55 : cluster [DBG] 3.12 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 1466368 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:58.687099+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 1458176 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:59.687314+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 1458176 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:00.687534+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 1449984 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:01.687762+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 407555 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 1449984 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:02.687938+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:31.722587+0000 osd.0 (osd.0) 56 : cluster [DBG] 2.11 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:31.732507+0000 osd.0 (osd.0) 57 : cluster [DBG] 2.11 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 1433600 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 57)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:31.722587+0000 osd.0 (osd.0) 56 : cluster [DBG] 2.11 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:31.732507+0000 osd.0 (osd.0) 57 : cluster [DBG] 2.11 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:03.688190+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:32.689571+0000 osd.0 (osd.0) 58 : cluster [DBG] 5.14 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:32.700087+0000 osd.0 (osd.0) 59 : cluster [DBG] 5.14 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 1425408 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 59)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:32.689571+0000 osd.0 (osd.0) 58 : cluster [DBG] 5.14 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:32.700087+0000 osd.0 (osd.0) 59 : cluster [DBG] 5.14 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:04.688424+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 1425408 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:05.688581+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 1417216 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:06.688720+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 412381 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 1417216 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:07.688880+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:36.692314+0000 osd.0 (osd.0) 60 : cluster [DBG] 2.13 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:36.702768+0000 osd.0 (osd.0) 61 : cluster [DBG] 2.13 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.713692665s of 10.046788216s, submitted: 8
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 61)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:36.692314+0000 osd.0 (osd.0) 60 : cluster [DBG] 2.13 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:36.702768+0000 osd.0 (osd.0) 61 : cluster [DBG] 2.13 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 1400832 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:08.689109+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:37.711887+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.15 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:37.722390+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.15 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 63)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:37.711887+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.15 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:37.722390+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.15 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1392640 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:09.689336+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1392640 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:10.689644+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:39.714631+0000 osd.0 (osd.0) 64 : cluster [DBG] 3.15 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:39.724682+0000 osd.0 (osd.0) 65 : cluster [DBG] 3.15 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 65)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:39.714631+0000 osd.0 (osd.0) 64 : cluster [DBG] 3.15 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:39.724682+0000 osd.0 (osd.0) 65 : cluster [DBG] 3.15 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 1384448 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:11.689897+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 1 last_log 66 sent 65 num 1 unsent 1 sending 1
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:41.681559+0000 osd.0 (osd.0) 66 : cluster [DBG] 7.1b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 419620 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 1368064 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 66)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:41.681559+0000 osd.0 (osd.0) 66 : cluster [DBG] 7.1b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:12.690363+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 1 last_log 67 sent 66 num 1 unsent 1 sending 1
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:41.692056+0000 osd.0 (osd.0) 67 : cluster [DBG] 7.1b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 1368064 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 67)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:41.692056+0000 osd.0 (osd.0) 67 : cluster [DBG] 7.1b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:13.690633+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 1 last_log 68 sent 67 num 1 unsent 1 sending 1
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:43.680601+0000 osd.0 (osd.0) 68 : cluster [DBG] 7.13 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1327104 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 68)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:43.680601+0000 osd.0 (osd.0) 68 : cluster [DBG] 7.13 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:14.690848+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 1 last_log 69 sent 68 num 1 unsent 1 sending 1
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:43.691184+0000 osd.0 (osd.0) 69 : cluster [DBG] 7.13 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1327104 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 69)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:43.691184+0000 osd.0 (osd.0) 69 : cluster [DBG] 7.13 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:15.691109+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 1318912 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:16.691331+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 422033 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 1318912 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:17.691571+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 1318912 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:18.691702+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 1310720 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:19.691890+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 1310720 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:20.692209+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 1302528 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:21.692379+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 422033 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 1302528 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:22.692551+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 1294336 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:23.692782+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 1286144 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:24.693014+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 1277952 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.342010498s of 17.853366852s, submitted: 8
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:25.693230+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:55.565374+0000 osd.0 (osd.0) 70 : cluster [DBG] 2.16 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:55.575706+0000 osd.0 (osd.0) 71 : cluster [DBG] 2.16 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 71)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:55.565374+0000 osd.0 (osd.0) 70 : cluster [DBG] 2.16 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:55.575706+0000 osd.0 (osd.0) 71 : cluster [DBG] 2.16 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 1269760 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:26.693500+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:56.611736+0000 osd.0 (osd.0) 72 : cluster [DBG] 3.9 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:56.622314+0000 osd.0 (osd.0) 73 : cluster [DBG] 3.9 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 73)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:56.611736+0000 osd.0 (osd.0) 72 : cluster [DBG] 3.9 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:56.622314+0000 osd.0 (osd.0) 73 : cluster [DBG] 3.9 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 426857 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 1269760 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:27.694526+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 1269760 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:28.695289+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:58.613524+0000 osd.0 (osd.0) 74 : cluster [DBG] 2.8 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:58.624069+0000 osd.0 (osd.0) 75 : cluster [DBG] 2.8 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 75)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:58.613524+0000 osd.0 (osd.0) 74 : cluster [DBG] 2.8 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:58.624069+0000 osd.0 (osd.0) 75 : cluster [DBG] 2.8 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 1228800 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:29.696035+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 1228800 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:30.696647+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:00.594778+0000 osd.0 (osd.0) 76 : cluster [DBG] 3.a scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:00.605458+0000 osd.0 (osd.0) 77 : cluster [DBG] 3.a scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 77)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:00.594778+0000 osd.0 (osd.0) 76 : cluster [DBG] 3.a scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:00.605458+0000 osd.0 (osd.0) 77 : cluster [DBG] 3.a scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1204224 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:31.696827+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:01.623503+0000 osd.0 (osd.0) 78 : cluster [DBG] 2.b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:01.634143+0000 osd.0 (osd.0) 79 : cluster [DBG] 2.b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 79)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:01.623503+0000 osd.0 (osd.0) 78 : cluster [DBG] 2.b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:01.634143+0000 osd.0 (osd.0) 79 : cluster [DBG] 2.b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 434090 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1204224 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:32.697070+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1196032 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:33.697452+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1196032 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:34.697774+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1196032 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.066798210s of 10.089524269s, submitted: 10
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:35.697965+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:05.654967+0000 osd.0 (osd.0) 80 : cluster [DBG] 7.f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:05.665520+0000 osd.0 (osd.0) 81 : cluster [DBG] 7.f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 81)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:05.654967+0000 osd.0 (osd.0) 80 : cluster [DBG] 7.f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:05.665520+0000 osd.0 (osd.0) 81 : cluster [DBG] 7.f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 1179648 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:36.698242+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:06.613011+0000 osd.0 (osd.0) 82 : cluster [DBG] 5.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:06.623202+0000 osd.0 (osd.0) 83 : cluster [DBG] 5.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 83)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:06.613011+0000 osd.0 (osd.0) 82 : cluster [DBG] 5.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:06.623202+0000 osd.0 (osd.0) 83 : cluster [DBG] 5.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 438912 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 1163264 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:37.698493+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:07.589698+0000 osd.0 (osd.0) 84 : cluster [DBG] 3.6 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:07.600265+0000 osd.0 (osd.0) 85 : cluster [DBG] 3.6 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 85)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:07.589698+0000 osd.0 (osd.0) 84 : cluster [DBG] 3.6 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:07.600265+0000 osd.0 (osd.0) 85 : cluster [DBG] 3.6 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 1155072 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:38.698747+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 1155072 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:39.698967+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 1155072 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:40.699218+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 1146880 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:41.699383+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 441323 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 1146880 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:42.699589+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 1138688 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:43.699733+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 1138688 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:44.699864+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 1130496 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:45.700069+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:15.526651+0000 osd.0 (osd.0) 86 : cluster [DBG] 5.5 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:15.537266+0000 osd.0 (osd.0) 87 : cluster [DBG] 5.5 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 87)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:15.526651+0000 osd.0 (osd.0) 86 : cluster [DBG] 5.5 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:15.537266+0000 osd.0 (osd.0) 87 : cluster [DBG] 5.5 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.836798668s of 10.853741646s, submitted: 8
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 1122304 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:46.700389+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:16.508694+0000 osd.0 (osd.0) 88 : cluster [DBG] 5.2 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:16.519266+0000 osd.0 (osd.0) 89 : cluster [DBG] 5.2 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 89)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:16.508694+0000 osd.0 (osd.0) 88 : cluster [DBG] 5.2 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:16.519266+0000 osd.0 (osd.0) 89 : cluster [DBG] 5.2 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 448558 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:47.700540+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:17.523024+0000 osd.0 (osd.0) 90 : cluster [DBG] 2.1f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:17.533562+0000 osd.0 (osd.0) 91 : cluster [DBG] 2.1f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 1114112 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 91)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:17.523024+0000 osd.0 (osd.0) 90 : cluster [DBG] 2.1f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:17.533562+0000 osd.0 (osd.0) 91 : cluster [DBG] 2.1f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:48.700777+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 1105920 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:49.701348+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:19.499733+0000 osd.0 (osd.0) 92 : cluster [DBG] 3.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:19.510414+0000 osd.0 (osd.0) 93 : cluster [DBG] 3.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1089536 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 93)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:19.499733+0000 osd.0 (osd.0) 92 : cluster [DBG] 3.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:19.510414+0000 osd.0 (osd.0) 93 : cluster [DBG] 3.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:50.701577+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:20.509245+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.2 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:20.519683+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.2 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1073152 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 95)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:20.509245+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.2 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:20.519683+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.2 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:51.702016+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:21.486570+0000 osd.0 (osd.0) 96 : cluster [DBG] 2.f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:21.496997+0000 osd.0 (osd.0) 97 : cluster [DBG] 2.f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1056768 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 97)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:21.486570+0000 osd.0 (osd.0) 96 : cluster [DBG] 2.f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:21.496997+0000 osd.0 (osd.0) 97 : cluster [DBG] 2.f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458202 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:52.702719+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:22.525232+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.6 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:22.535213+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.6 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1048576 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 99)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:22.525232+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.6 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:22.535213+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.6 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:53.702969+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1048576 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:54.703122+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:24.525868+0000 osd.0 (osd.0) 100 : cluster [DBG] 2.1c scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:24.536642+0000 osd.0 (osd.0) 101 : cluster [DBG] 2.1c scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1024000 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 101)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:24.525868+0000 osd.0 (osd.0) 100 : cluster [DBG] 2.1c scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:24.536642+0000 osd.0 (osd.0) 101 : cluster [DBG] 2.1c scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:55.703360+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1024000 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:56.703523+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1024000 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.038551331s of 11.073991776s, submitted: 14
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 463028 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:57.703667+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:27.582697+0000 osd.0 (osd.0) 102 : cluster [DBG] 7.18 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:27.592843+0000 osd.0 (osd.0) 103 : cluster [DBG] 7.18 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 1007616 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 103)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:27.582697+0000 osd.0 (osd.0) 102 : cluster [DBG] 7.18 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:27.592843+0000 osd.0 (osd.0) 103 : cluster [DBG] 7.18 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:58.704225+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 999424 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:59.704854+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 999424 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:00.705072+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 999424 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:01.705292+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:31.631444+0000 osd.0 (osd.0) 104 : cluster [DBG] 5.4 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:31.642175+0000 osd.0 (osd.0) 105 : cluster [DBG] 5.4 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 983040 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 105)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:31.631444+0000 osd.0 (osd.0) 104 : cluster [DBG] 5.4 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:31.642175+0000 osd.0 (osd.0) 105 : cluster [DBG] 5.4 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 465439 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:02.705564+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 983040 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:03.705739+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 974848 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:04.706401+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:34.594307+0000 osd.0 (osd.0) 106 : cluster [DBG] 5.7 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:34.604737+0000 osd.0 (osd.0) 107 : cluster [DBG] 5.7 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 958464 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 107)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:34.594307+0000 osd.0 (osd.0) 106 : cluster [DBG] 5.7 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:34.604737+0000 osd.0 (osd.0) 107 : cluster [DBG] 5.7 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:05.706672+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:35.627478+0000 osd.0 (osd.0) 108 : cluster [DBG] 7.9 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:35.638085+0000 osd.0 (osd.0) 109 : cluster [DBG] 7.9 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 942080 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 109)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:35.627478+0000 osd.0 (osd.0) 108 : cluster [DBG] 7.9 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:35.638085+0000 osd.0 (osd.0) 109 : cluster [DBG] 7.9 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:06.707265+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:36.648220+0000 osd.0 (osd.0) 110 : cluster [DBG] 3.1 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:36.658744+0000 osd.0 (osd.0) 111 : cluster [DBG] 3.1 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 917504 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 111)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:36.648220+0000 osd.0 (osd.0) 110 : cluster [DBG] 3.1 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:36.658744+0000 osd.0 (osd.0) 111 : cluster [DBG] 3.1 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 472672 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:07.708059+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 917504 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.989176750s of 11.068286896s, submitted: 10
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:08.708282+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:38.651017+0000 osd.0 (osd.0) 112 : cluster [DBG] 7.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:38.661599+0000 osd.0 (osd.0) 113 : cluster [DBG] 7.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 909312 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 113)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:38.651017+0000 osd.0 (osd.0) 112 : cluster [DBG] 7.3 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:38.661599+0000 osd.0 (osd.0) 113 : cluster [DBG] 7.3 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:09.708854+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 884736 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:10.709090+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:40.678674+0000 osd.0 (osd.0) 114 : cluster [DBG] 2.1d scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:40.689235+0000 osd.0 (osd.0) 115 : cluster [DBG] 2.1d scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 868352 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 115)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:40.678674+0000 osd.0 (osd.0) 114 : cluster [DBG] 2.1d scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:40.689235+0000 osd.0 (osd.0) 115 : cluster [DBG] 2.1d scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:11.709951+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 860160 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:12.710380+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 477496 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 860160 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:13.710755+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 843776 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:14.711104+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 4 last_log 119 sent 115 num 4 unsent 4 sending 4
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:43.723918+0000 osd.0 (osd.0) 116 : cluster [DBG] 3.c scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:43.734673+0000 osd.0 (osd.0) 117 : cluster [DBG] 3.c scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:44.677672+0000 osd.0 (osd.0) 118 : cluster [DBG] 7.1f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:44.688248+0000 osd.0 (osd.0) 119 : cluster [DBG] 7.1f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 827392 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 119)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:43.723918+0000 osd.0 (osd.0) 116 : cluster [DBG] 3.c scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:43.734673+0000 osd.0 (osd.0) 117 : cluster [DBG] 3.c scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:44.677672+0000 osd.0 (osd.0) 118 : cluster [DBG] 7.1f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:44.688248+0000 osd.0 (osd.0) 119 : cluster [DBG] 7.1f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:15.711401+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 827392 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:16.711609+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 827392 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:17.712128+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 482320 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 819200 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.023255348s of 10.042103767s, submitted: 8
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:18.712305+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:48.693242+0000 osd.0 (osd.0) 120 : cluster [DBG] 3.1b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:48.703955+0000 osd.0 (osd.0) 121 : cluster [DBG] 3.1b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 819200 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 121)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:48.693242+0000 osd.0 (osd.0) 120 : cluster [DBG] 3.1b scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:48.703955+0000 osd.0 (osd.0) 121 : cluster [DBG] 3.1b scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:19.712753+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 802816 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:20.713211+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 794624 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:21.713382+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:51.652897+0000 osd.0 (osd.0) 122 : cluster [DBG] 2.18 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:51.663463+0000 osd.0 (osd.0) 123 : cluster [DBG] 2.18 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 770048 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 123)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:51.652897+0000 osd.0 (osd.0) 122 : cluster [DBG] 2.18 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:51.663463+0000 osd.0 (osd.0) 123 : cluster [DBG] 2.18 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:22.713661+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:52.665552+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.4 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:52.676049+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.4 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 489557 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 761856 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 125)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:52.665552+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.4 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:52.676049+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.4 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:23.714000+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 761856 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:24.714166+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 753664 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:25.714494+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 753664 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:26.714763+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:55.724963+0000 osd.0 (osd.0) 126 : cluster [DBG] 5.1e scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:55.735509+0000 osd.0 (osd.0) 127 : cluster [DBG] 5.1e scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 745472 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 127)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:55.724963+0000 osd.0 (osd.0) 126 : cluster [DBG] 5.1e scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:55.735509+0000 osd.0 (osd.0) 127 : cluster [DBG] 5.1e scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:27.715040+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 4 last_log 131 sent 127 num 4 unsent 4 sending 4
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:56.720704+0000 osd.0 (osd.0) 128 : cluster [DBG] 2.19 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:56.731161+0000 osd.0 (osd.0) 129 : cluster [DBG] 2.19 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:57.685567+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:57.696265+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 496794 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 737280 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 131)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:56.720704+0000 osd.0 (osd.0) 128 : cluster [DBG] 2.19 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:56.731161+0000 osd.0 (osd.0) 129 : cluster [DBG] 2.19 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:57.685567+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.f scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:57.696265+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.f scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:28.715287+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 729088 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:29.715441+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 704512 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:30.715704+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 704512 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:31.715991+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 696320 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:32.716793+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 496794 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 696320 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:33.717279+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.012383461s of 15.082575798s, submitted: 12
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 696320 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:34.717777+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:17:03.775798+0000 osd.0 (osd.0) 132 : cluster [DBG] 3.17 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:17:03.786385+0000 osd.0 (osd.0) 133 : cluster [DBG] 3.17 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 688128 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 133)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:17:03.775798+0000 osd.0 (osd.0) 132 : cluster [DBG] 3.17 scrub starts
Jan 29 09:35:49 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:17:03.786385+0000 osd.0 (osd.0) 133 : cluster [DBG] 3.17 scrub ok
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:35.718114+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 688128 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:36.718562+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 688128 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:37.718720+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 679936 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:38.719097+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 679936 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:39.719523+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 663552 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:40.720269+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 663552 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:41.720697+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 663552 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:42.721053+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 655360 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:43.721306+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 655360 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:44.721675+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 647168 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:45.721924+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 647168 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:46.722088+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 647168 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:47.722269+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 638976 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:48.722545+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 638976 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:49.722738+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 622592 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:50.722925+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 614400 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:51.723206+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 614400 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:52.723437+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 614400 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:53.723644+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 606208 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:54.724043+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 606208 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:55.724302+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 589824 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:56.724481+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 589824 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:57.724876+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 581632 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:58.725055+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 573440 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:59.725278+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 573440 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:00.725564+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 565248 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:01.725841+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 565248 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:02.726006+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 565248 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:03.726229+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 557056 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:04.726823+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 557056 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:05.727286+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 548864 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:06.727583+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 548864 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:07.727728+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 532480 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:08.727883+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 516096 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:09.728171+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 516096 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:10.728568+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 507904 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:11.728720+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 507904 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:12.728868+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 507904 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:13.729202+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 499712 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:14.729418+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 499712 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:15.729656+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 491520 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:16.729816+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 491520 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:17.730023+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 491520 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:18.730213+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 483328 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:19.730506+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 483328 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:20.730805+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 483328 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:21.730996+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 466944 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:22.731256+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 466944 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:23.731476+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 458752 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:24.731662+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 458752 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:25.731928+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 442368 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:26.732124+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 434176 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:27.732691+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 434176 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:28.732952+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 417792 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:29.733205+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 409600 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:30.733438+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 409600 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:31.733611+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 401408 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:32.733776+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 401408 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:33.733955+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 401408 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:34.734235+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 393216 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:35.734394+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 393216 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:36.734581+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 385024 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:37.734750+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 385024 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:38.734948+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 393216 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:39.735113+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 385024 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:40.735378+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 385024 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:41.735534+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 376832 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:42.735694+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 376832 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:43.735901+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 376832 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:44.736581+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 368640 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:45.736751+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 360448 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:46.736952+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 352256 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:47.737175+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 352256 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:48.737352+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 352256 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:49.737493+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 344064 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:50.737685+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 344064 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:51.737849+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 344064 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:52.738099+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 335872 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:53.738207+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 335872 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:54.738358+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 327680 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:55.738526+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 327680 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:56.738670+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 327680 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:57.738824+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 319488 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:58.739058+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 319488 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:59.739236+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 311296 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:00.739421+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 311296 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:01.739566+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:02.739737+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 311296 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:03.739909+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 303104 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:04.740079+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 303104 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:05.740259+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 294912 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:06.740524+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 294912 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:07.740745+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 294912 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:08.740965+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 286720 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:09.741099+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 286720 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:10.741290+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 286720 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:11.741422+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 278528 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:12.741625+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 278528 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:13.741814+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 262144 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:14.742096+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 262144 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:15.742303+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 262144 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:16.742495+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 253952 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:17.742707+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 253952 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:18.742974+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 253952 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:19.743270+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 245760 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:20.743534+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 245760 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:21.743797+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 237568 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:22.744013+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 237568 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:23.744237+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 237568 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:24.744409+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 229376 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:25.744588+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 229376 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:26.744908+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 229376 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:27.745107+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 221184 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:28.745256+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 221184 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:29.745423+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 221184 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:30.745717+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 212992 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:31.745900+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 212992 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:32.746115+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 204800 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:33.746308+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 204800 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:34.746516+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 204800 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:35.746730+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 196608 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:36.746923+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 196608 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:37.747101+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 188416 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:38.747284+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 188416 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:39.747450+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 188416 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:40.747631+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 180224 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:41.747763+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 180224 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:42.747899+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 180224 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:43.748053+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 172032 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 nova_compute[236255]: 2026-01-29 09:35:49.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:44.748227+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 172032 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:45.748455+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 172032 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:46.748649+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 163840 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:47.748834+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 163840 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:48.748985+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 155648 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:49.749233+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 147456 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:50.749457+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 147456 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:51.749811+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 139264 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:52.749965+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 139264 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:53.750113+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 131072 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:54.750293+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 131072 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:55.750439+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 131072 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:56.750649+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 122880 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:57.750834+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 122880 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:58.751030+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 122880 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:59.751197+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 114688 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:00.751473+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 114688 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:01.751633+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 106496 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:02.751954+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 106496 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:03.752183+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 106496 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:04.752355+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 98304 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:05.752592+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 98304 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:06.752856+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 90112 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:07.753077+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 90112 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:08.753315+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 90112 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:09.753533+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 81920 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:10.753750+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 81920 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:11.753975+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 81920 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:12.754211+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 73728 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:13.754364+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 73728 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:14.754587+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 65536 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:15.754882+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 65536 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:16.755050+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 65536 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:17.755241+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 57344 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:18.755439+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 57344 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:19.755579+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 57344 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:20.755787+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 49152 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:21.756000+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 49152 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:22.756236+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 40960 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:23.756459+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 40960 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:24.756669+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 40960 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:25.756839+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 32768 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:26.757030+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 32768 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:27.757263+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 24576 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:28.757495+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 24576 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:29.757640+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 24576 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:30.757905+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 16384 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:31.758077+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 16384 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:32.758269+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 16384 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:33.758482+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 8192 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:34.758635+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 8192 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:35.758828+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 0 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:36.758961+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 0 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:37.759123+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 1040384 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:38.759318+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 1040384 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:39.759480+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 1040384 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:40.759687+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 1032192 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:41.759881+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 1032192 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:42.760027+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 1032192 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:43.760236+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 1024000 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:44.760406+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 1024000 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:45.760642+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 1015808 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:46.760833+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 1015808 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:47.761047+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 1015808 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:48.761249+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1007616 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:49.761419+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1007616 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:50.761616+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 999424 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:51.761776+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 999424 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:52.761973+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 999424 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:53.762109+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 991232 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:54.762215+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 991232 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:55.762441+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 983040 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:56.762750+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 983040 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:57.762910+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 983040 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:58.763084+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 974848 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:59.763201+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 974848 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:00.763390+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 966656 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:01.763522+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 966656 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:02.763667+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 966656 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:03.763820+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 958464 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:04.764028+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 958464 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:05.764200+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 958464 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:06.764352+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 950272 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:07.764509+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 950272 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:08.764728+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 950272 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:09.764894+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 942080 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:10.765057+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 942080 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:11.765202+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 933888 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:12.765331+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 933888 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:13.765503+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 925696 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:14.765720+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 925696 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:15.765924+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 925696 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:16.766079+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 917504 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:17.766208+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 917504 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:18.766346+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 917504 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:19.766563+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 909312 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:20.766760+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 909312 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:21.766918+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 909312 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:22.767074+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 901120 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:23.767250+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 901120 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:24.767387+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 892928 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:25.767558+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 892928 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:26.767682+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 884736 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:27.767807+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 876544 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:28.767938+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 876544 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:29.768070+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 868352 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:30.768472+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 868352 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:31.768629+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 868352 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:32.768789+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 860160 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:33.768996+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 860160 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:34.769251+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 860160 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:35.769429+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 851968 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:36.769615+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 851968 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:37.769804+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 843776 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:38.770055+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 843776 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:39.770294+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 843776 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:40.770532+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 835584 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:41.770729+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 835584 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:42.770902+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 835584 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:43.771056+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 827392 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:44.771256+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 827392 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:45.771490+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 819200 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:46.771709+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 819200 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:47.771900+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 819200 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:48.772102+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 811008 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:49.772331+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 811008 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:50.772526+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 802816 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:51.772701+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 802816 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:52.772907+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 802816 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:53.773126+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 794624 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:54.773304+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 794624 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:55.773515+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 794624 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:56.773692+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 786432 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:57.773842+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 786432 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:58.773986+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 786432 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:59.774145+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 778240 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:00.774348+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 778240 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:01.774505+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 770048 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:02.774693+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 770048 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:03.774857+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 770048 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:04.775024+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 761856 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:05.775201+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 761856 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:06.775362+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 761856 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:07.775491+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 753664 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:08.775667+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 753664 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:09.775861+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 745472 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:10.776059+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 745472 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:11.776223+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 745472 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:12.776369+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 737280 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:13.776523+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 737280 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:14.776699+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 737280 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:15.776872+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 729088 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:16.777008+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 729088 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:17.777207+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 720896 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:18.777343+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 720896 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:19.777482+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 720896 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:20.777673+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 712704 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:21.777822+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 712704 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:22.777951+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 712704 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:23.778090+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 704512 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:24.778300+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 704512 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:25.778574+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 696320 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:26.778685+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 696320 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:27.778890+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 696320 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:28.779097+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 688128 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:29.779348+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 688128 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:30.779577+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 679936 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:31.779816+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 679936 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:32.779985+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 679936 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:33.780189+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 671744 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:34.780378+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 671744 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:35.780589+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 663552 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:36.780802+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 663552 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:37.781066+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 663552 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:38.781248+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 655360 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:39.781497+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 655360 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:40.781791+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 655360 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:41.782053+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 647168 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:42.782456+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 647168 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:43.782623+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 638976 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:44.782894+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 638976 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:45.783163+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 638976 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:46.783369+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 630784 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:47.783550+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 630784 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:48.783788+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 622592 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:49.784080+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 622592 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:50.784372+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 622592 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:51.784609+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 614400 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:52.784815+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 614400 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:53.784984+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 614400 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:54.785120+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 606208 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:55.785251+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 606208 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:56.785915+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 606208 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:57.786069+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 598016 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:58.786185+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 598016 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:59.786346+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 589824 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:00.786598+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 589824 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:01.786750+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 581632 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:02.786916+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 581632 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:03.787059+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 581632 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:04.787226+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 573440 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:05.787357+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 573440 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:06.787556+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 573440 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:07.787681+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 565248 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:08.787795+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 565248 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:09.787929+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 565248 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:10.788104+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 557056 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:11.788281+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 557056 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:12.788464+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 557056 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:13.788671+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 548864 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:14.788816+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 548864 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:15.788989+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4241 writes, 19K keys, 4241 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4241 writes, 370 syncs, 11.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4241 writes, 19K keys, 4241 commit groups, 1.0 writes per commit group, ingest: 15.92 MB, 0.03 MB/s
                                           Interval WAL: 4241 writes, 370 syncs, 11.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 466944 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:16.789172+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 466944 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:17.789329+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 466944 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:18.789485+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 458752 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:19.790205+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 458752 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:20.790400+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 450560 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:21.791080+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 450560 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:22.791229+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 450560 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:23.791382+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 442368 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:24.791540+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 442368 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:25.791699+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 442368 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:26.791866+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 434176 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:27.792020+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 434176 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:28.792175+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:29.792416+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 425984 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:30.792727+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 425984 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:31.792916+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 425984 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:32.793185+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 417792 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:33.793392+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 417792 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:34.793567+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 409600 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:35.793723+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 409600 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:36.793891+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 409600 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:37.794031+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 401408 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:38.794190+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 401408 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:39.794341+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 401408 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:40.794565+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 393216 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:41.794697+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 393216 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:42.794849+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 393216 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:43.795035+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 385024 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:44.795272+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 385024 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:45.795424+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 376832 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:46.795582+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 376832 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:47.795736+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 376832 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:48.795891+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 368640 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:49.796047+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 368640 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:50.796256+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 360448 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:51.796434+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 360448 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:52.796600+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 360448 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:53.796741+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 352256 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:54.796923+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 352256 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:55.797126+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 352256 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:56.797319+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 344064 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:57.797484+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 344064 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:58.797639+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 335872 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:59.797781+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 335872 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:00.797950+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 335872 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:01.798083+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 327680 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:02.798425+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 327680 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:03.798625+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 319488 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:04.798775+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 319488 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:05.798972+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 319488 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:06.799086+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 311296 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:07.799231+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 311296 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:08.799414+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 311296 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:09.800072+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 303104 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:10.800314+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 303104 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:11.800469+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 294912 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:12.800603+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 294912 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:13.800732+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 294912 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:14.800862+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 286720 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:15.800997+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 286720 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:16.801162+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 286720 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:17.801267+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 278528 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:18.801413+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 278528 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:19.801561+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 270336 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:20.801747+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 270336 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:21.801877+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 270336 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:22.801996+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 262144 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:23.802233+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 262144 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:24.802385+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 253952 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:25.802562+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 253952 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:26.802722+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 253952 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:27.802914+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 245760 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:28.803075+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 245760 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:29.803194+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 245760 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:30.803352+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 237568 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:31.803498+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 237568 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:32.803675+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 229376 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:33.803820+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 229376 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:34.804004+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 229376 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:35.804152+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 221184 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:36.804299+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 221184 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:37.804493+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 221184 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:38.804653+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 212992 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:39.804781+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 212992 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:40.804933+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 204800 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:41.805059+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 204800 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:42.805209+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 204800 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:43.805338+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 196608 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:44.805533+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 196608 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:45.805662+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 188416 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:46.806072+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 188416 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:47.806342+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 188416 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:48.806467+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 180224 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:49.806586+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 180224 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:50.806737+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 180224 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:51.806865+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 172032 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:52.806981+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 172032 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:53.807108+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 172032 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:54.807285+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 163840 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:55.807432+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 163840 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:56.807614+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 155648 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:57.807803+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 155648 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:58.807954+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 155648 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:59.808171+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 147456 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:00.808367+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 147456 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:01.808497+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 139264 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:02.808604+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 139264 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:03.808756+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 139264 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:04.808912+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 131072 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:05.809778+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 131072 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:06.809991+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 122880 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:07.810239+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 122880 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:08.810416+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 122880 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:09.810587+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 114688 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:10.810825+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 114688 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:11.811006+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 114688 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:12.811201+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 106496 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:13.811437+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 106496 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:14.811625+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 98304 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:15.811803+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 98304 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:16.812112+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 98304 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:17.812316+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:18.812487+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:19.812650+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:20.812913+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:21.813087+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:22.813209+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:23.813394+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:24.813659+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:25.813807+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:26.813981+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:27.814101+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:28.814302+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:29.814502+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:30.814707+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:31.814865+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:32.815034+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:33.815226+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:34.815392+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:35.815535+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:36.815668+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:37.815872+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:38.816080+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:39.816256+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:40.816448+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:41.816626+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:42.816772+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:43.816955+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:44.817084+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:45.817221+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:46.817376+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:47.817558+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:48.817691+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:49.817856+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:50.818067+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:51.818194+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:52.818358+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:53.818716+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:54.818865+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:55.819055+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:56.819240+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:57.819391+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:58.819550+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:59.819699+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:00.819879+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:01.820038+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:02.820218+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:03.820433+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:04.820596+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:05.820746+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:06.820912+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:07.821092+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:08.821282+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:09.821507+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:10.821726+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:11.821956+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:12.822260+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:13.822385+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:14.822566+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:15.822772+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:16.822913+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:17.823199+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:18.823357+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:19.823507+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:20.823791+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:21.824004+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:22.824231+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:23.824428+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:24.824596+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:25.824774+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:26.824933+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:27.825099+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:28.825225+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:29.825368+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:30.825760+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:31.825916+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:32.826056+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:33.826235+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:34.826418+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:35.826599+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:36.826808+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:37.827001+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:38.827778+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:39.827945+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:40.828610+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:41.829003+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:42.829342+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:43.829491+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:44.830028+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:45.830197+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:46.830508+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:47.830657+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:48.830824+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:49.830992+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:50.831173+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:51.831320+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:52.831597+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:53.832025+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:54.832219+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:55.832367+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:56.832502+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:57.832633+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:58.832760+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:59.832895+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:00.833042+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:01.833173+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:02.833313+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:03.833475+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:04.833668+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:05.833814+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:06.834001+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:07.834243+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:08.834383+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:09.834528+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:10.834707+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:11.834894+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:12.835059+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:13.835255+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:14.835482+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:15.835627+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:16.835888+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:17.836100+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:18.836194+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:19.836552+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:20.836843+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:21.837067+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:22.837239+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:23.837399+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:24.837543+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:25.837709+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:26.837864+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:27.838063+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:28.838248+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:29.838415+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:30.838631+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:31.838837+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:32.839022+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:33.839229+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:34.839385+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:35.839545+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:36.839713+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:37.839937+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:38.840128+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:39.840300+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:40.840489+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:41.840625+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:42.840802+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:43.841023+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:44.841211+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:45.841357+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:46.841543+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:47.841734+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:48.841916+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:49.842156+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:50.842361+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:51.842593+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:52.842751+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:53.842991+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:54.843215+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:55.843365+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:56.843519+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:57.843678+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:58.843845+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:59.843995+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:00.844220+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:01.844387+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:02.844576+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:03.844710+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:04.844858+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:05.845028+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:06.845232+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:07.845416+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:08.845632+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:09.845786+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:10.845974+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:11.846155+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:12.846371+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:13.846626+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:14.846838+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:15.847011+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:16.847195+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:17.849570+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:18.849994+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:19.850866+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:20.851308+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:21.851612+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:22.852069+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:23.852375+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:24.852621+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:25.853860+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:26.854121+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:27.854320+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:28.854489+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:29.854661+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:30.854861+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:31.855415+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:32.855566+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:33.855766+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:34.855934+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:35.856154+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:36.856286+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:37.856874+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:38.857039+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:39.857344+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:40.857584+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:41.857880+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:42.858271+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:43.858474+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:44.858636+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 ms_handle_reset con 0x556d24953400 session 0x556d23683180
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273be000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:45.859071+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:46.859272+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:47.859455+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:48.860041+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:49.860232+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:50.860450+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:51.860619+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:52.860840+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:53.861068+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:54.861243+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:55.861448+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:56.861651+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:57.861827+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:58.861994+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:59.862223+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:00.862419+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:01.862646+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:02.862762+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:03.862889+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:04.863023+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:05.863176+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:06.863286+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:07.863399+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:08.863525+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:09.863626+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:10.863804+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:11.863943+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:12.864754+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:13.864880+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:14.865046+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:15.865211+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:16.865392+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:17.865520+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:18.865708+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:19.865874+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:20.866085+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:21.866262+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:22.866438+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:23.866619+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:24.877071+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:25.877194+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:26.877326+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:27.877505+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:28.877645+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:29.877848+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:30.878076+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:31.878236+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:32.878404+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:33.878621+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:34.878769+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:35.879020+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 ms_handle_reset con 0x556d25957000 session 0x556d25eaa540
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273be400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:36.879207+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:37.879359+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:38.879501+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:39.879642+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:40.879861+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:41.880084+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:42.880260+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:43.880417+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:44.880557+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:45.880720+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:46.880912+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:47.881078+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:48.881267+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:49.881445+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:50.881653+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:51.881795+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:52.881988+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:53.882150+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:54.882330+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:55.882520+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:56.882717+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:57.882907+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:58.883089+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:59.883298+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:00.883502+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:01.883709+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:02.883905+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:03.884068+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:04.884278+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:05.884441+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:06.884604+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:07.884755+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:08.885028+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:09.885226+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:10.885539+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:11.885887+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:12.886117+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:13.886339+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:14.886605+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:15.886761+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:16.886897+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:17.887084+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:18.887301+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:19.887488+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:20.887733+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:21.887918+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:22.888086+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:23.888244+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:24.888363+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:25.888542+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:26.888681+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:27.888787+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:28.888899+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:29.888985+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:30.889209+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:31.889359+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:32.889472+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:33.889589+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:34.889668+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:35.889779+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:36.889893+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:37.890055+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:38.890208+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:39.890354+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:40.890505+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:41.890616+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:42.890739+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:43.890908+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:44.891053+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:45.891191+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:46.891415+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:47.891568+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:48.891707+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:49.891821+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:50.892024+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:51.892211+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:52.892296+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:53.892481+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:54.892668+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:55.892834+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:56.893008+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:57.893193+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:58.893400+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:59.893585+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:00.893830+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:01.893987+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:02.894159+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:03.894315+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:04.894492+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:05.894665+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:06.894783+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:07.894991+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:08.895239+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:09.895458+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:10.895708+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:11.895906+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:12.896053+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:13.896311+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:14.896492+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:15.896693+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:16.896871+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:17.897020+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:18.897183+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:19.897440+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:20.897709+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:21.897920+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:22.898156+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:23.898394+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:24.898656+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:25.898840+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:26.899034+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:27.899293+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:28.899507+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:29.899670+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:30.899892+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:31.900221+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:32.900423+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:33.900631+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:34.900839+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:35.901208+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:36.901450+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:37.901654+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:38.901869+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:39.902060+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:40.902467+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:41.902638+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:42.902812+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:43.902987+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:44.903123+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:45.903327+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:46.903573+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:47.903744+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:48.903911+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:49.904181+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:50.904429+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:51.904600+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:52.904739+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:53.904942+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:54.905103+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:55.905279+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:56.905439+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:57.905627+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:58.905770+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:59.905959+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:00.906160+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:01.906371+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:02.906574+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:03.906728+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:04.906888+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:05.907024+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:06.907257+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:07.907426+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:08.907639+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:09.907824+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:10.908046+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:11.908234+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:12.908406+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:13.908602+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:14.908775+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:15.908987+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:16.909166+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:17.909296+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:18.909439+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:19.909635+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:20.909872+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:21.910022+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:22.910164+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:23.910312+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:24.910469+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:25.910610+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:26.910795+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:27.911008+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:28.911165+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:29.911354+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:30.911600+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:31.911752+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:32.911909+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:33.912097+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:34.912278+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:35.912435+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:36.912613+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:37.912799+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:38.912983+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:39.913220+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:40.913569+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:41.913744+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:42.913897+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:43.914074+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:44.914280+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:45.914492+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:46.914669+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:47.914837+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:48.915000+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:49.915165+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:50.915466+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:51.915723+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:52.915923+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:53.916082+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:54.916228+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:55.916467+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:56.916737+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:57.916895+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:58.917080+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:59.917257+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:00.917441+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:01.917631+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:02.917743+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:03.917919+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:04.918076+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:05.918196+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:06.918328+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:07.918419+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:08.918628+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 936.087219238s of 936.090454102s, submitted: 2
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:09.918735+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 47 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:10.918860+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 507716 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 884736 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 48 handle_osd_map epochs [49,49], i have 48, src has [1,49]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:11.918981+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 49 ms_handle_reset con 0x556d25957000 session 0x556d2701a000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 811008 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:12.919193+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 49 heartbeat osd_stat(store_statfs(0x4fe14d000/0x0/0x4ffc00000, data 0x32429/0x7d000, compress 0x0/0x0/0x0, omap 0x6935, meta 0x1a296cb), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273be800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 49 heartbeat osd_stat(store_statfs(0x4fe14d000/0x0/0x4ffc00000, data 0x32429/0x7d000, compress 0x0/0x0/0x0, omap 0x6935, meta 0x1a296cb), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:13.919328+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 50 ms_handle_reset con 0x556d273be800 session 0x556d26aef500
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 50 heartbeat osd_stat(store_statfs(0x4fe14c000/0x0/0x4ffc00000, data 0x3244c/0x7e000, compress 0x0/0x0/0x0, omap 0x6935, meta 0x1a296cb), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:14.919518+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 524288 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:15.919670+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4304 writes, 19K keys, 4304 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4304 writes, 400 syncs, 10.76 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 63 writes, 170 keys, 63 commit groups, 1.0 writes per commit group, ingest: 0.17 MB, 0.00 MB/s
                                           Interval WAL: 63 writes, 30 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 516657 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 491520 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:16.919894+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 50 handle_osd_map epochs [50,51], i have 51, src has [1,51]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:17.920093+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:18.920265+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:19.920408+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:20.920590+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518517 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:21.920763+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:22.920886+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:23.921056+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:24.921218+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:25.921339+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518517 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:26.921470+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:27.921621+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:28.921815+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:29.922015+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:30.922209+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518517 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:31.922341+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:32.922476+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:33.922633+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:34.922810+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:35.923021+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518517 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:36.923231+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:37.923440+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:38.923612+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:39.923801+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:40.923980+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518517 data_alloc: 218103808 data_used: 934
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:41.924097+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bec00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.821676254s of 32.106227875s, submitted: 42
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 147456 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 51 handle_osd_map epochs [51,52], i have 52, src has [1,52]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:42.924253+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 52 ms_handle_reset con 0x556d273bec00 session 0x556d2701a8c0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 52 ms_handle_reset con 0x556d273bf000 session 0x556d2714bc00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 52 heartbeat osd_stat(store_statfs(0x4fe144000/0x0/0x4ffc00000, data 0x3533b/0x88000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 52 ms_handle_reset con 0x556d273bf400 session 0x556d2747c540
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 1179648 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 52 ms_handle_reset con 0x556d25957000 session 0x556d2747ca80
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:43.924382+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273be800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1130496 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 52 handle_osd_map epochs [52,53], i have 53, src has [1,53]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:44.924500+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 53 ms_handle_reset con 0x556d273be800 session 0x556d2747cfc0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 53 ms_handle_reset con 0x556d273bf000 session 0x556d271cd880
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 53 ms_handle_reset con 0x556d2749e400 session 0x556d26aee1c0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 2269184 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:45.924633+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749ec00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 578074 data_alloc: 218103808 data_used: 1958
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 10272768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:46.924777+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 54 ms_handle_reset con 0x556d2749e800 session 0x556d271cce00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 18685952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:47.924941+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 54 heartbeat osd_stat(store_statfs(0x4fd134000/0x0/0x4ffc00000, data 0x1039529/0x1093000, compress 0x0/0x0/0x0, omap 0x7edb, meta 0x1a28125), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 54 heartbeat osd_stat(store_statfs(0x4fc939000/0x0/0x4ffc00000, data 0x1839529/0x1893000, compress 0x0/0x0/0x0, omap 0x7edb, meta 0x1a28125), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 18685952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:48.925064+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 55 ms_handle_reset con 0x556d2749ec00 session 0x556d271ae380
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 55 ms_handle_reset con 0x556d25957000 session 0x556d25eaac40
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 18620416 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:49.925226+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 18620416 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:50.925560+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 55 heartbeat osd_stat(store_statfs(0x4fc936000/0x0/0x4ffc00000, data 0x183a727/0x1893000, compress 0x0/0x0/0x0, omap 0x82b9, meta 0x1a27d47), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 56 ms_handle_reset con 0x556d2749f000 session 0x556d26fde380
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546369 data_alloc: 218103808 data_used: 5011
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 18579456 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:51.925726+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.260757446s of 10.046355247s, submitted: 121
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 57 ms_handle_reset con 0x556d2749f400 session 0x556d23683180
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 18530304 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:52.925954+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 58 heartbeat osd_stat(store_statfs(0x4fe130000/0x0/0x4ffc00000, data 0x3d346/0x97000, compress 0x0/0x0/0x0, omap 0x8acf, meta 0x1a27531), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 18391040 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:53.926085+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 59 ms_handle_reset con 0x556d273bf400 session 0x556d2701b500
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:54.926253+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 18464768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 60 ms_handle_reset con 0x556d25957000 session 0x556d271cda40
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:55.926381+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 18456576 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749ec00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 61 ms_handle_reset con 0x556d2749ec00 session 0x556d252a7c00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 563435 data_alloc: 218103808 data_used: 9056
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:56.926530+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 18440192 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 61 heartbeat osd_stat(store_statfs(0x4fe126000/0x0/0x4ffc00000, data 0x4144c/0xa0000, compress 0x0/0x0/0x0, omap 0x9421, meta 0x1a26bdf), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:57.926677+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 18399232 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:58.926763+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 18333696 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 62 heartbeat osd_stat(store_statfs(0x4fe11b000/0x0/0x4ffc00000, data 0x43f54/0xa7000, compress 0x0/0x0/0x0, omap 0x9a46, meta 0x1a265ba), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:59.926913+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 18333696 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 62 ms_handle_reset con 0x556d2749f000 session 0x556d2716e700
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:00.927063+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 18317312 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 64 ms_handle_reset con 0x556d2749f400 session 0x556d26c35340
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x461e3/0xab000, compress 0x0/0x0/0x0, omap 0x9cb3, meta 0x1a2634d), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bec00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe118000/0x0/0x4ffc00000, data 0x483c5/0xb0000, compress 0x0/0x0/0x0, omap 0x9f24, meta 0x1a260dc), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 580573 data_alloc: 218103808 data_used: 9056
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:01.927266+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 17039360 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 65 ms_handle_reset con 0x556d273bec00 session 0x556d2701ae00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 65 ms_handle_reset con 0x556d25957000 session 0x556d25071c00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749ec00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.924083710s of 10.564991951s, submitted: 87
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:02.927508+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 17047552 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 66 ms_handle_reset con 0x556d2749f000 session 0x556d271aea80
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 66 ms_handle_reset con 0x556d2749ec00 session 0x556d250708c0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:03.927840+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 17022976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 67 ms_handle_reset con 0x556d2749f400 session 0x556d26c34540
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:04.928006+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 15753216 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 67 ms_handle_reset con 0x556d2749e800 session 0x556d25071340
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:05.928283+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 15794176 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 592591 data_alloc: 218103808 data_used: 9091
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:06.928452+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 15794176 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe109000/0x0/0x4ffc00000, data 0x4deaa/0xbf000, compress 0x0/0x0/0x0, omap 0xacff, meta 0x1a25301), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:07.928623+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 15794176 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:08.929603+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 15892480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 70 ms_handle_reset con 0x556d25957000 session 0x556d25eab340
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749fc00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:09.930253+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 15859712 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 71 ms_handle_reset con 0x556d2749fc00 session 0x556d251261c0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:10.931775+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 15826944 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:11.931940+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 600150 data_alloc: 218103808 data_used: 9091
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 15826944 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 72 ms_handle_reset con 0x556d2749e400 session 0x556d25eaba40
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:12.932273+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 15802368 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fe100000/0x0/0x4ffc00000, data 0x535ad/0xca000, compress 0x0/0x0/0x0, omap 0xbbd2, meta 0x1a2442e), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.267245293s of 10.588221550s, submitted: 77
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:13.932930+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 15745024 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 73 ms_handle_reset con 0x556d2749e000 session 0x556d24ef16c0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:14.933227+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 15392768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 74 heartbeat osd_stat(store_statfs(0x4fe0fc000/0x0/0x4ffc00000, data 0x54b9b/0xcb000, compress 0x0/0x0/0x0, omap 0xc13f, meta 0x1a23ec1), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 74 ms_handle_reset con 0x556d2749f800 session 0x556d25070e00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:15.933636+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 15343616 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 75 heartbeat osd_stat(store_statfs(0x4fe0fe000/0x0/0x4ffc00000, data 0x555c3/0xcc000, compress 0x0/0x0/0x0, omap 0xc507, meta 0x1a23af9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 75 ms_handle_reset con 0x556d25957000 session 0x556d25d48c40
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:16.933818+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 607715 data_alloc: 218103808 data_used: 13152
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 15319040 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 76 ms_handle_reset con 0x556d2749e000 session 0x556d2716fdc0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749fc00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:17.934029+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 15220736 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:18.934280+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 15155200 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:19.934665+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 15155200 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fe0f0000/0x0/0x4ffc00000, data 0x58bb8/0xd4000, compress 0x0/0x0/0x0, omap 0xd09f, meta 0x1a22f61), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:20.934967+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 14983168 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 78 ms_handle_reset con 0x556d2749f400 session 0x556d25071500
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:21.935235+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 622898 data_alloc: 218103808 data_used: 13152
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 14983168 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:22.935449+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 14983168 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 78 handle_osd_map epochs [80,80], i have 78, src has [1,80]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 78 handle_osd_map epochs [79,80], i have 78, src has [1,80]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.230520248s of 10.425002098s, submitted: 107
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 80 ms_handle_reset con 0x556d2749f000 session 0x556d24ef0c40
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:23.935690+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749ec00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 14548992 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 80 ms_handle_reset con 0x556d2749ec00 session 0x556d2716f880
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749ec00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 81 ms_handle_reset con 0x556d2749ec00 session 0x556d27029dc0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:24.935986+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 14508032 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 82 heartbeat osd_stat(store_statfs(0x4fe0ea000/0x0/0x4ffc00000, data 0x5e2cb/0xe0000, compress 0x0/0x0/0x0, omap 0xda52, meta 0x1a225ae), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 82 ms_handle_reset con 0x556d25957000 session 0x556d2714ce00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:25.936225+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 14467072 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25957800 session 0x556d27193c00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:26.936436+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 639125 data_alloc: 218103808 data_used: 13152
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 14368768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:27.936632+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 14368768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:28.936789+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 14368768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:29.936904+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 heartbeat osd_stat(store_statfs(0x4fe0df000/0x0/0x4ffc00000, data 0x60f1d/0xe7000, compress 0x0/0x0/0x0, omap 0xe432, meta 0x1a21bce), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 14368768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25e6a400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25e6a400 session 0x556d25eaaa80
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d24952800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d24952800 session 0x556d25126700
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25957000 session 0x556d27007500
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 heartbeat osd_stat(store_statfs(0x4fe0df000/0x0/0x4ffc00000, data 0x60f1d/0xe7000, compress 0x0/0x0/0x0, omap 0xe432, meta 0x1a21bce), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread fragmentation_score=0.000120 took=0.000025s
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25957800 session 0x556d2528b880
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761c000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761d400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:30.937045+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d2761d400 session 0x556d25071340
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d2761c000 session 0x556d27137500
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25e6ac00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25e6ac00 session 0x556d271ae380
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25e6ac00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25e6ac00 session 0x556d2716f880
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25957000 session 0x556d25eaaa80
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273be800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d273be800 session 0x556d24ef0a80
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 14229504 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d273bf000 session 0x556d271af6c0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:31.937189+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 639081 data_alloc: 218103808 data_used: 13152
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 14213120 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bfc00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d273bfc00 session 0x556d25070a80
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bfc00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 84 ms_handle_reset con 0x556d273bfc00 session 0x556d25070000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 84 ms_handle_reset con 0x556d273bf800 session 0x556d25eaac40
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:32.937325+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 14163968 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:33.937436+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fe0df000/0x0/0x4ffc00000, data 0x62415/0xeb000, compress 0x0/0x0/0x0, omap 0xec3e, meta 0x1a213c2), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 14163968 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bec00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25148800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.474985123s of 10.634794235s, submitted: 90
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:34.937585+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 14123008 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:35.937809+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 14123008 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:36.938002+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 644262 data_alloc: 218103808 data_used: 13152
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 14123008 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25148000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 84 ms_handle_reset con 0x556d25148000 session 0x556d2528ae00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761d400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:37.938230+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 14065664 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 85 ms_handle_reset con 0x556d2761d400 session 0x556d270061c0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761c000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761c400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 85 ms_handle_reset con 0x556d2761c400 session 0x556d26aeee00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 85 ms_handle_reset con 0x556d2761c000 session 0x556d270068c0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761dc00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 85 ms_handle_reset con 0x556d2761dc00 session 0x556d2714a700
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761d800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:38.938458+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 13778944 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 86 ms_handle_reset con 0x556d2761d800 session 0x556d26fdf340
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 86 heartbeat osd_stat(store_statfs(0x4fcf3c000/0x0/0x4ffc00000, data 0x639e2/0xee000, compress 0x0/0x0/0x0, omap 0xeec4, meta 0x2bc113c), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:39.938699+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 13746176 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 86 heartbeat osd_stat(store_statfs(0x4fcf3c000/0x0/0x4ffc00000, data 0x639e2/0xee000, compress 0x0/0x0/0x0, omap 0xeec4, meta 0x2bc113c), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:40.938961+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 13746176 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:41.939602+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 87 heartbeat osd_stat(store_statfs(0x4fcf39000/0x0/0x4ffc00000, data 0x65003/0xf1000, compress 0x0/0x0/0x0, omap 0xf14e, meta 0x2bc0eb2), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 653026 data_alloc: 218103808 data_used: 13168
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 13721600 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761c800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 87 ms_handle_reset con 0x556d2761c800 session 0x556d271afa40
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:42.939963+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761d400
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 13557760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:43.940113+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 88 ms_handle_reset con 0x556d2761d400 session 0x556d271ae380
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 13533184 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:44.941071+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 13533184 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 88 ms_handle_reset con 0x556d273bec00 session 0x556d2528bc00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.035446167s of 11.152009964s, submitted: 83
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 88 ms_handle_reset con 0x556d25148800 session 0x556d23683180
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761c000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 88 handle_osd_map epochs [88,89], i have 89, src has [1,89]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:45.941191+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 89 ms_handle_reset con 0x556d2761c000 session 0x556d271cd500
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 13418496 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:46.941500+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 656275 data_alloc: 218103808 data_used: 14350
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 13418496 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:47.941986+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 89 heartbeat osd_stat(store_statfs(0x4fcf32000/0x0/0x4ffc00000, data 0x69246/0xf8000, compress 0x0/0x0/0x0, omap 0x100fa, meta 0x2bbff06), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 13418496 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 90 ms_handle_reset con 0x556d2749e400 session 0x556d26c34380
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 90 ms_handle_reset con 0x556d2749fc00 session 0x556d25d481c0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:48.942431+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25148800
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 90 ms_handle_reset con 0x556d25148800 session 0x556d25d496c0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 13500416 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 90 heartbeat osd_stat(store_statfs(0x4fcf31000/0x0/0x4ffc00000, data 0x6a74a/0xfb000, compress 0x0/0x0/0x0, omap 0x106a5, meta 0x2bbf95b), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:49.942800+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 13500416 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 90 heartbeat osd_stat(store_statfs(0x4fcf31000/0x0/0x4ffc00000, data 0x6a74a/0xfb000, compress 0x0/0x0/0x0, omap 0x106a5, meta 0x2bbf95b), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:50.943398+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bec00
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 90 ms_handle_reset con 0x556d273bec00 session 0x556d26fdfa40
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf000
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 13500416 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:51.943715+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 91 ms_handle_reset con 0x556d273bf000 session 0x556d2701ba40
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 661855 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 13475840 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:52.944030+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 92 heartbeat osd_stat(store_statfs(0x4fcf28000/0x0/0x4ffc00000, data 0x6d224/0x100000, compress 0x0/0x0/0x0, omap 0x10e51, meta 0x2bbf1af), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:53.944287+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 92 heartbeat osd_stat(store_statfs(0x4fcf28000/0x0/0x4ffc00000, data 0x6d224/0x100000, compress 0x0/0x0/0x0, omap 0x10e51, meta 0x2bbf1af), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:54.944762+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:55.944924+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:56.945184+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 665091 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 92 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.064319611s of 12.219558716s, submitted: 94
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:57.945580+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:58.945730+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:59.945997+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:00.946300+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:01.946540+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:02.946770+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:03.946970+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:04.947187+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:05.947321+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:06.947461+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:07.947674+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:08.947934+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:09.948216+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:10.948433+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:11.948645+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:12.948808+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:13.948994+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:14.949156+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:15.949354+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:16.949537+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:17.949705+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:18.949873+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:19.950069+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:20.950323+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:21.950467+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:22.950870+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:23.951085+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:24.951204+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:25.951360+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:26.951526+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:27.951844+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:28.951983+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:29.952163+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:30.952317+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:31.952453+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:32.952877+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:33.953062+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:34.953226+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:35.953358+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:36.953524+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:37.953659+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:38.953795+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:39.954023+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:40.954192+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:41.954342+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:42.954516+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:43.954668+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:44.954795+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:45.954963+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:46.955078+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:47.955237+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:48.955480+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:49.955638+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:50.955820+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:51.955980+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:52.956162+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:53.956279+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:54.956545+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:55.956691+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:56.956849+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:57.956961+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:58.957197+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:59.957383+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:00.957553+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:01.957692+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:02.957829+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:03.957996+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:04.958145+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:05.958309+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:06.958434+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:07.958614+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:08.958748+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:09.958858+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:10.959016+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:11.959177+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14816 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:49 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2996753216' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 29 09:35:49 compute-0 ceph-mon[75183]: pgmap v800: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:49 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/949738712' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 29 09:35:49 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3070989499' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 29 09:35:49 compute-0 ceph-mon[75183]: from='client.14814 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:12.959355+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:13.959477+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:14.959589+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:15.959709+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 13393920 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:16.959863+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: do_command 'config diff' '{prefix=config diff}'
Jan 29 09:35:49 compute-0 ceph-osd[86001]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: do_command 'config show' '{prefix=config show}'
Jan 29 09:35:49 compute-0 ceph-osd[86001]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 29 09:35:49 compute-0 ceph-osd[86001]: do_command 'counter dump' '{prefix=counter dump}'
Jan 29 09:35:49 compute-0 ceph-osd[86001]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 29 09:35:49 compute-0 ceph-osd[86001]: do_command 'counter schema' '{prefix=counter schema}'
Jan 29 09:35:49 compute-0 ceph-osd[86001]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:35:49 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:35:49 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 12787712 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:17.960004+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 12992512 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:35:49 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:35:49 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:18.960151+0000)
Jan 29 09:35:49 compute-0 ceph-osd[86001]: do_command 'log dump' '{prefix=log dump}'
Jan 29 09:35:49 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14818 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:49 compute-0 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 09:35:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14820 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14824 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:50 compute-0 ceph-mon[75183]: from='client.14816 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:50 compute-0 ceph-mon[75183]: from='client.14818 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:50 compute-0 ceph-mon[75183]: from='client.14820 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v801: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14828 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 29 09:35:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200987651' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 29 09:35:51 compute-0 podman[244824]: 2026-01-29 09:35:51.120715811 +0000 UTC m=+0.065321108 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 29 09:35:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:35:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/432959222' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:35:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:35:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/432959222' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:35:51 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14830 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 29 09:35:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3594550481' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 29 09:35:51 compute-0 ceph-mon[75183]: from='client.14824 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:51 compute-0 ceph-mon[75183]: pgmap v801: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:51 compute-0 ceph-mon[75183]: from='client.14828 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:51 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4200987651' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 29 09:35:51 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/432959222' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:35:51 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/432959222' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:35:51 compute-0 ceph-mon[75183]: from='client.14830 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:51 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3594550481' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 29 09:35:51 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14838 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:51 compute-0 systemd[1]: Starting Hostname Service...
Jan 29 09:35:52 compute-0 systemd[1]: Started Hostname Service.
Jan 29 09:35:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 29 09:35:52 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1332716006' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 29 09:35:52 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14842 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 29 09:35:52 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/538277431' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 29 09:35:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:35:52 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 29 09:35:52 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 29 09:35:52 compute-0 ceph-mon[75183]: from='client.14838 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1332716006' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 29 09:35:52 compute-0 ceph-mon[75183]: from='client.14842 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:35:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/538277431' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 29 09:35:52 compute-0 ceph-mon[75183]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 29 09:35:52 compute-0 ceph-mon[75183]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 29 09:35:52 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 29 09:35:52 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 29 09:35:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v802: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 29 09:35:53 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225035771' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 29 09:35:53 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14856 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:53 compute-0 ceph-mon[75183]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 29 09:35:53 compute-0 ceph-mon[75183]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 29 09:35:53 compute-0 ceph-mon[75183]: pgmap v802: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:53 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/225035771' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 29 09:35:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 29 09:35:54 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3771325622' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 29 09:35:54 compute-0 podman[245347]: 2026-01-29 09:35:54.169792179 +0000 UTC m=+0.071295171 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:35:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 29 09:35:54 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2059860266' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 29 09:35:54 compute-0 ceph-mon[75183]: from='client.14856 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:54 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3771325622' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 29 09:35:54 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2059860266' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 29 09:35:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v803: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 29 09:35:55 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2717823212' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 29 09:35:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 29 09:35:55 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1131095738' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 29 09:35:55 compute-0 ceph-mon[75183]: pgmap v803: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:55 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2717823212' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 29 09:35:55 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1131095738' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:35:56
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['volumes', 'images', 'vms', 'backups', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14866 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:35:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 29 09:35:56 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1871004967' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:35:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v804: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:57 compute-0 ceph-mon[75183]: from='client.14866 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:57 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1871004967' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 29 09:35:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 29 09:35:57 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502109206' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 29 09:35:57 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14872 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:35:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 29 09:35:57 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3950958871' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 29 09:35:58 compute-0 ceph-mon[75183]: pgmap v804: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:58 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/502109206' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 29 09:35:58 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3950958871' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 29 09:35:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14876 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14878 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v805: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:35:59 compute-0 ceph-mon[75183]: from='client.14872 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:59 compute-0 ceph-mon[75183]: from='client.14876 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:35:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 29 09:35:59 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3700047750' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 29 09:35:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Jan 29 09:35:59 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1074146333' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Jan 29 09:35:59 compute-0 ovs-appctl[246568]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 29 09:35:59 compute-0 ovs-appctl[246572]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 29 09:35:59 compute-0 ovs-appctl[246578]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 29 09:36:00 compute-0 ceph-mon[75183]: from='client.14878 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:36:00 compute-0 ceph-mon[75183]: pgmap v805: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:00 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3700047750' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 29 09:36:00 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1074146333' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14884 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14886 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.756942845403104e-07 of space, bias 1.0, pg target 8.270828536209312e-05 quantized to 32 (current 32)
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.5000327611377854e-07 of space, bias 1.0, pg target 4.500098283413356e-05 quantized to 32 (current 32)
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683123502180493 of space, bias 1.0, pg target 0.2004937050654148 quantized to 32 (current 32)
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2051960589356773e-06 of space, bias 4.0, pg target 0.0014462352707228128 quantized to 16 (current 32)
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:36:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v806: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Jan 29 09:36:01 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3094393025' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Jan 29 09:36:01 compute-0 ceph-mon[75183]: from='client.14884 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:36:01 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3094393025' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Jan 29 09:36:01 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0)
Jan 29 09:36:01 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4247367318' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.756942845403104e-07 of space, bias 1.0, pg target 8.270828536209312e-05 quantized to 32 (current 32)
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.5000327611377854e-07 of space, bias 1.0, pg target 4.500098283413356e-05 quantized to 32 (current 32)
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683123502180493 of space, bias 1.0, pg target 0.2004937050654148 quantized to 32 (current 32)
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2051960589356773e-06 of space, bias 4.0, pg target 0.0014462352707228128 quantized to 16 (current 32)
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:36:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14892 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:36:02 compute-0 ceph-mon[75183]: from='client.14886 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:36:02 compute-0 ceph-mon[75183]: pgmap v806: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:02 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4247367318' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Jan 29 09:36:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14894 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:36:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:36:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v807: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 29 09:36:03 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1005121340' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 29 09:36:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Jan 29 09:36:04 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718444240' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Jan 29 09:36:04 compute-0 ceph-mon[75183]: from='client.14892 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:36:04 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1005121340' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 29 09:36:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v808: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:05 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Jan 29 09:36:05 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/673357161' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Jan 29 09:36:05 compute-0 ceph-mon[75183]: from='client.14894 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:36:05 compute-0 ceph-mon[75183]: pgmap v807: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:05 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2718444240' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Jan 29 09:36:05 compute-0 ceph-mon[75183]: pgmap v808: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:05 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/673357161' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Jan 29 09:36:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14902 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:06 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Jan 29 09:36:06 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947438926' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 29 09:36:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v809: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:36:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Jan 29 09:36:07 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3819780271' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Jan 29 09:36:07 compute-0 ceph-mon[75183]: from='client.14902 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:07 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/947438926' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 29 09:36:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Jan 29 09:36:08 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268134448' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Jan 29 09:36:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v810: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:36:09.039 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:36:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:36:09.041 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:36:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:36:09.041 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:36:09 compute-0 sudo[247774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:36:09 compute-0 sudo[247774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:36:09 compute-0 sudo[247774]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:09 compute-0 sudo[247810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 29 09:36:09 compute-0 sudo[247810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:36:09 compute-0 ceph-mon[75183]: pgmap v809: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:09 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3819780271' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Jan 29 09:36:09 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/268134448' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Jan 29 09:36:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Jan 29 09:36:09 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187066208' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Jan 29 09:36:09 compute-0 podman[247893]: 2026-01-29 09:36:09.99027937 +0000 UTC m=+0.076426850 container exec 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 29 09:36:10 compute-0 podman[247893]: 2026-01-29 09:36:10.099701686 +0000 UTC m=+0.185849146 container exec_died 19fe20f3e43ea4db91e5316f7db856211c7be91325c80294eb1b9a961e753c53 (image=quay.io/ceph/ceph:v20, name=ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:36:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14912 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:10 compute-0 ceph-mon[75183]: pgmap v810: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:10 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2187066208' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Jan 29 09:36:10 compute-0 ceph-mon[75183]: from='client.14912 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:10 compute-0 sudo[247810]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:36:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v811: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:10 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:36:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:36:10 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Jan 29 09:36:10 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062446756' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Jan 29 09:36:10 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:36:11 compute-0 sudo[248125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:36:11 compute-0 sudo[248125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:36:11 compute-0 sudo[248125]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:11 compute-0 sudo[248224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:36:11 compute-0 sudo[248224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:36:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Jan 29 09:36:11 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/284916905' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Jan 29 09:36:11 compute-0 sudo[248224]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:36:11 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:36:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:36:11 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:36:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:36:11 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:36:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:36:11 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:36:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:36:11 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:36:11 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:36:11 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:36:11 compute-0 virtqemud[236585]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 29 09:36:11 compute-0 sudo[248478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:36:11 compute-0 sudo[248478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:36:11 compute-0 sudo[248478]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:11 compute-0 sudo[248513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:36:11 compute-0 sudo[248513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:36:11 compute-0 ceph-mon[75183]: pgmap v811: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:36:11 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2062446756' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Jan 29 09:36:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:36:11 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/284916905' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Jan 29 09:36:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:36:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:36:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:36:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:36:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:36:11 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:36:11 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14918 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:12 compute-0 podman[248574]: 2026-01-29 09:36:12.060890185 +0000 UTC m=+0.044802739 container create 17c7629fea6c34ce7dc4a7e4ab9fcbb3d79b6292686c5ee7e39f1e608ea2a1dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 29 09:36:12 compute-0 systemd[1]: Started libpod-conmon-17c7629fea6c34ce7dc4a7e4ab9fcbb3d79b6292686c5ee7e39f1e608ea2a1dc.scope.
Jan 29 09:36:12 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:36:12 compute-0 podman[248574]: 2026-01-29 09:36:12.04010608 +0000 UTC m=+0.024018654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:36:12 compute-0 podman[248574]: 2026-01-29 09:36:12.150682007 +0000 UTC m=+0.134594581 container init 17c7629fea6c34ce7dc4a7e4ab9fcbb3d79b6292686c5ee7e39f1e608ea2a1dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_yalow, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 09:36:12 compute-0 podman[248574]: 2026-01-29 09:36:12.161522172 +0000 UTC m=+0.145434726 container start 17c7629fea6c34ce7dc4a7e4ab9fcbb3d79b6292686c5ee7e39f1e608ea2a1dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 09:36:12 compute-0 brave_yalow[248617]: 167 167
Jan 29 09:36:12 compute-0 podman[248574]: 2026-01-29 09:36:12.169833258 +0000 UTC m=+0.153745832 container attach 17c7629fea6c34ce7dc4a7e4ab9fcbb3d79b6292686c5ee7e39f1e608ea2a1dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 09:36:12 compute-0 podman[248574]: 2026-01-29 09:36:12.170407414 +0000 UTC m=+0.154319988 container died 17c7629fea6c34ce7dc4a7e4ab9fcbb3d79b6292686c5ee7e39f1e608ea2a1dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 09:36:12 compute-0 systemd[1]: libpod-17c7629fea6c34ce7dc4a7e4ab9fcbb3d79b6292686c5ee7e39f1e608ea2a1dc.scope: Deactivated successfully.
Jan 29 09:36:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a630bf6a7fd6fef6e7bc18952a486badfbaa2dbd756966feafc5a00dcc6b9058-merged.mount: Deactivated successfully.
Jan 29 09:36:12 compute-0 podman[248574]: 2026-01-29 09:36:12.272872741 +0000 UTC m=+0.256785295 container remove 17c7629fea6c34ce7dc4a7e4ab9fcbb3d79b6292686c5ee7e39f1e608ea2a1dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 09:36:12 compute-0 systemd[1]: libpod-conmon-17c7629fea6c34ce7dc4a7e4ab9fcbb3d79b6292686c5ee7e39f1e608ea2a1dc.scope: Deactivated successfully.
Jan 29 09:36:12 compute-0 systemd[1]: Starting Time & Date Service...
Jan 29 09:36:12 compute-0 systemd[1]: Started Time & Date Service.
Jan 29 09:36:12 compute-0 podman[248662]: 2026-01-29 09:36:12.420587128 +0000 UTC m=+0.049400474 container create ce0abdcec4da3718f2a363acc1466cd944520a5a2191612b7533c359cba3ac2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_benz, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 29 09:36:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Jan 29 09:36:12 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2015572169' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Jan 29 09:36:12 compute-0 systemd[1]: Started libpod-conmon-ce0abdcec4da3718f2a363acc1466cd944520a5a2191612b7533c359cba3ac2a.scope.
Jan 29 09:36:12 compute-0 podman[248662]: 2026-01-29 09:36:12.394723065 +0000 UTC m=+0.023536431 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:36:12 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0db3df4ec3bd5462251e708ec746762aacd16478d4cbbcdd9078c41c603926d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0db3df4ec3bd5462251e708ec746762aacd16478d4cbbcdd9078c41c603926d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0db3df4ec3bd5462251e708ec746762aacd16478d4cbbcdd9078c41c603926d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0db3df4ec3bd5462251e708ec746762aacd16478d4cbbcdd9078c41c603926d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0db3df4ec3bd5462251e708ec746762aacd16478d4cbbcdd9078c41c603926d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:36:12 compute-0 podman[248662]: 2026-01-29 09:36:12.528840573 +0000 UTC m=+0.157653949 container init ce0abdcec4da3718f2a363acc1466cd944520a5a2191612b7533c359cba3ac2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 09:36:12 compute-0 podman[248662]: 2026-01-29 09:36:12.535587486 +0000 UTC m=+0.164400832 container start ce0abdcec4da3718f2a363acc1466cd944520a5a2191612b7533c359cba3ac2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 09:36:12 compute-0 podman[248662]: 2026-01-29 09:36:12.548521488 +0000 UTC m=+0.177334834 container attach ce0abdcec4da3718f2a363acc1466cd944520a5a2191612b7533c359cba3ac2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:36:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v812: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:12 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14922 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:12 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:36:12 compute-0 ceph-mon[75183]: from='client.14918 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:12 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2015572169' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Jan 29 09:36:12 compute-0 jovial_benz[248691]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:36:12 compute-0 jovial_benz[248691]: --> All data devices are unavailable
Jan 29 09:36:13 compute-0 podman[248662]: 2026-01-29 09:36:13.020455043 +0000 UTC m=+0.649268389 container died ce0abdcec4da3718f2a363acc1466cd944520a5a2191612b7533c359cba3ac2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:36:13 compute-0 systemd[1]: libpod-ce0abdcec4da3718f2a363acc1466cd944520a5a2191612b7533c359cba3ac2a.scope: Deactivated successfully.
Jan 29 09:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0db3df4ec3bd5462251e708ec746762aacd16478d4cbbcdd9078c41c603926d-merged.mount: Deactivated successfully.
Jan 29 09:36:13 compute-0 podman[248662]: 2026-01-29 09:36:13.073976849 +0000 UTC m=+0.702790195 container remove ce0abdcec4da3718f2a363acc1466cd944520a5a2191612b7533c359cba3ac2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 09:36:13 compute-0 systemd[1]: libpod-conmon-ce0abdcec4da3718f2a363acc1466cd944520a5a2191612b7533c359cba3ac2a.scope: Deactivated successfully.
Jan 29 09:36:13 compute-0 sudo[248513]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:13 compute-0 rsyslogd[998]: imjournal: 26074 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 29 09:36:13 compute-0 sudo[248847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:36:13 compute-0 sudo[248847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:36:13 compute-0 sudo[248847]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:13 compute-0 sudo[248876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:36:13 compute-0 sudo[248876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:36:13 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14924 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:13 compute-0 podman[248934]: 2026-01-29 09:36:13.548785073 +0000 UTC m=+0.040784460 container create a85743d190aabab7543442ef916dd9ff11056e4a023ae69ba57e7b882aab5ca2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Jan 29 09:36:13 compute-0 systemd[1]: Started libpod-conmon-a85743d190aabab7543442ef916dd9ff11056e4a023ae69ba57e7b882aab5ca2.scope.
Jan 29 09:36:13 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:36:13 compute-0 podman[248934]: 2026-01-29 09:36:13.531441741 +0000 UTC m=+0.023441148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:36:13 compute-0 podman[248934]: 2026-01-29 09:36:13.635040879 +0000 UTC m=+0.127040296 container init a85743d190aabab7543442ef916dd9ff11056e4a023ae69ba57e7b882aab5ca2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_austin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:36:13 compute-0 podman[248934]: 2026-01-29 09:36:13.642667736 +0000 UTC m=+0.134667133 container start a85743d190aabab7543442ef916dd9ff11056e4a023ae69ba57e7b882aab5ca2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_austin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:36:13 compute-0 podman[248934]: 2026-01-29 09:36:13.646815179 +0000 UTC m=+0.138814576 container attach a85743d190aabab7543442ef916dd9ff11056e4a023ae69ba57e7b882aab5ca2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 09:36:13 compute-0 distracted_austin[248969]: 167 167
Jan 29 09:36:13 compute-0 systemd[1]: libpod-a85743d190aabab7543442ef916dd9ff11056e4a023ae69ba57e7b882aab5ca2.scope: Deactivated successfully.
Jan 29 09:36:13 compute-0 conmon[248969]: conmon a85743d190aabab75434 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a85743d190aabab7543442ef916dd9ff11056e4a023ae69ba57e7b882aab5ca2.scope/container/memory.events
Jan 29 09:36:13 compute-0 podman[248934]: 2026-01-29 09:36:13.650438928 +0000 UTC m=+0.142438335 container died a85743d190aabab7543442ef916dd9ff11056e4a023ae69ba57e7b882aab5ca2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 29 09:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7078b5939ae73e2a0e7994fb253755aa66560fd4987685c80ac9b105eee0ee07-merged.mount: Deactivated successfully.
Jan 29 09:36:13 compute-0 podman[248934]: 2026-01-29 09:36:13.700364446 +0000 UTC m=+0.192363833 container remove a85743d190aabab7543442ef916dd9ff11056e4a023ae69ba57e7b882aab5ca2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_austin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 29 09:36:13 compute-0 systemd[1]: libpod-conmon-a85743d190aabab7543442ef916dd9ff11056e4a023ae69ba57e7b882aab5ca2.scope: Deactivated successfully.
Jan 29 09:36:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Jan 29 09:36:13 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1599282007' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Jan 29 09:36:13 compute-0 podman[248992]: 2026-01-29 09:36:13.859533835 +0000 UTC m=+0.050008571 container create 947aa195030a67fd19844188036fd5e32625a5d94056bbd1c652515b592b4e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 29 09:36:13 compute-0 ceph-mon[75183]: pgmap v812: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:13 compute-0 ceph-mon[75183]: from='client.14922 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:13 compute-0 ceph-mon[75183]: from='client.14924 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:13 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1599282007' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Jan 29 09:36:13 compute-0 systemd[1]: Started libpod-conmon-947aa195030a67fd19844188036fd5e32625a5d94056bbd1c652515b592b4e4a.scope.
Jan 29 09:36:13 compute-0 podman[248992]: 2026-01-29 09:36:13.836357114 +0000 UTC m=+0.026831880 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:36:13 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:36:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac4eba03680b7003e98c9994f46c745a048284c2c0c88198df5241ba85d581d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:36:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac4eba03680b7003e98c9994f46c745a048284c2c0c88198df5241ba85d581d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:36:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac4eba03680b7003e98c9994f46c745a048284c2c0c88198df5241ba85d581d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:36:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac4eba03680b7003e98c9994f46c745a048284c2c0c88198df5241ba85d581d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:36:13 compute-0 podman[248992]: 2026-01-29 09:36:13.959087272 +0000 UTC m=+0.149562028 container init 947aa195030a67fd19844188036fd5e32625a5d94056bbd1c652515b592b4e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:36:13 compute-0 podman[248992]: 2026-01-29 09:36:13.967522542 +0000 UTC m=+0.157997278 container start 947aa195030a67fd19844188036fd5e32625a5d94056bbd1c652515b592b4e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:36:13 compute-0 podman[248992]: 2026-01-29 09:36:13.971249643 +0000 UTC m=+0.161724399 container attach 947aa195030a67fd19844188036fd5e32625a5d94056bbd1c652515b592b4e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 09:36:14 compute-0 determined_babbage[249013]: {
Jan 29 09:36:14 compute-0 determined_babbage[249013]:     "0": [
Jan 29 09:36:14 compute-0 determined_babbage[249013]:         {
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "devices": [
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "/dev/loop3"
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             ],
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_name": "ceph_lv0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_size": "21470642176",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "name": "ceph_lv0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "tags": {
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.cluster_name": "ceph",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.crush_device_class": "",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.encrypted": "0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.objectstore": "bluestore",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.osd_id": "0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.type": "block",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.vdo": "0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.with_tpm": "0"
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             },
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "type": "block",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "vg_name": "ceph_vg0"
Jan 29 09:36:14 compute-0 determined_babbage[249013]:         }
Jan 29 09:36:14 compute-0 determined_babbage[249013]:     ],
Jan 29 09:36:14 compute-0 determined_babbage[249013]:     "1": [
Jan 29 09:36:14 compute-0 determined_babbage[249013]:         {
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "devices": [
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "/dev/loop4"
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             ],
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_name": "ceph_lv1",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_size": "21470642176",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "name": "ceph_lv1",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "tags": {
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.cluster_name": "ceph",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.crush_device_class": "",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.encrypted": "0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.objectstore": "bluestore",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.osd_id": "1",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.type": "block",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.vdo": "0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.with_tpm": "0"
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             },
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "type": "block",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "vg_name": "ceph_vg1"
Jan 29 09:36:14 compute-0 determined_babbage[249013]:         }
Jan 29 09:36:14 compute-0 determined_babbage[249013]:     ],
Jan 29 09:36:14 compute-0 determined_babbage[249013]:     "2": [
Jan 29 09:36:14 compute-0 determined_babbage[249013]:         {
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "devices": [
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "/dev/loop5"
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             ],
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_name": "ceph_lv2",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_size": "21470642176",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "name": "ceph_lv2",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "tags": {
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.cluster_name": "ceph",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.crush_device_class": "",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.encrypted": "0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.objectstore": "bluestore",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.osd_id": "2",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.type": "block",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.vdo": "0",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:                 "ceph.with_tpm": "0"
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             },
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "type": "block",
Jan 29 09:36:14 compute-0 determined_babbage[249013]:             "vg_name": "ceph_vg2"
Jan 29 09:36:14 compute-0 determined_babbage[249013]:         }
Jan 29 09:36:14 compute-0 determined_babbage[249013]:     ]
Jan 29 09:36:14 compute-0 determined_babbage[249013]: }
Jan 29 09:36:14 compute-0 systemd[1]: libpod-947aa195030a67fd19844188036fd5e32625a5d94056bbd1c652515b592b4e4a.scope: Deactivated successfully.
Jan 29 09:36:14 compute-0 podman[248992]: 2026-01-29 09:36:14.287572756 +0000 UTC m=+0.478047502 container died 947aa195030a67fd19844188036fd5e32625a5d94056bbd1c652515b592b4e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:36:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-fac4eba03680b7003e98c9994f46c745a048284c2c0c88198df5241ba85d581d-merged.mount: Deactivated successfully.
Jan 29 09:36:14 compute-0 podman[248992]: 2026-01-29 09:36:14.340611629 +0000 UTC m=+0.531086355 container remove 947aa195030a67fd19844188036fd5e32625a5d94056bbd1c652515b592b4e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_babbage, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:36:14 compute-0 systemd[1]: libpod-conmon-947aa195030a67fd19844188036fd5e32625a5d94056bbd1c652515b592b4e4a.scope: Deactivated successfully.
Jan 29 09:36:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Jan 29 09:36:14 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/477035191' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Jan 29 09:36:14 compute-0 sudo[248876]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:14 compute-0 sudo[249055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:36:14 compute-0 sudo[249055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:36:14 compute-0 sudo[249055]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:14 compute-0 sudo[249081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:36:14 compute-0 sudo[249081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:36:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14930 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:14 compute-0 podman[249138]: 2026-01-29 09:36:14.775838016 +0000 UTC m=+0.043124384 container create 0d70d6477b27bf91ac1e5fdfe57917c2e67596b50c219397322a39ff8f57d0fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hoover, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:36:14 compute-0 systemd[1]: Started libpod-conmon-0d70d6477b27bf91ac1e5fdfe57917c2e67596b50c219397322a39ff8f57d0fc.scope.
Jan 29 09:36:14 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:36:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v813: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:14 compute-0 podman[249138]: 2026-01-29 09:36:14.756600853 +0000 UTC m=+0.023887241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:36:14 compute-0 podman[249138]: 2026-01-29 09:36:14.873378119 +0000 UTC m=+0.140664507 container init 0d70d6477b27bf91ac1e5fdfe57917c2e67596b50c219397322a39ff8f57d0fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 09:36:14 compute-0 podman[249138]: 2026-01-29 09:36:14.880053981 +0000 UTC m=+0.147340339 container start 0d70d6477b27bf91ac1e5fdfe57917c2e67596b50c219397322a39ff8f57d0fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hoover, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:36:14 compute-0 epic_hoover[249157]: 167 167
Jan 29 09:36:14 compute-0 systemd[1]: libpod-0d70d6477b27bf91ac1e5fdfe57917c2e67596b50c219397322a39ff8f57d0fc.scope: Deactivated successfully.
Jan 29 09:36:14 compute-0 conmon[249157]: conmon 0d70d6477b27bf91ac1e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d70d6477b27bf91ac1e5fdfe57917c2e67596b50c219397322a39ff8f57d0fc.scope/container/memory.events
Jan 29 09:36:14 compute-0 podman[249138]: 2026-01-29 09:36:14.893674951 +0000 UTC m=+0.160961409 container attach 0d70d6477b27bf91ac1e5fdfe57917c2e67596b50c219397322a39ff8f57d0fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hoover, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 09:36:14 compute-0 podman[249138]: 2026-01-29 09:36:14.894601116 +0000 UTC m=+0.161887534 container died 0d70d6477b27bf91ac1e5fdfe57917c2e67596b50c219397322a39ff8f57d0fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hoover, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 29 09:36:14 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/477035191' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Jan 29 09:36:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7d9a8b8a7455be572db8cd35e97e0568a91d25c9d48551e2060d0f3141b497c-merged.mount: Deactivated successfully.
Jan 29 09:36:14 compute-0 podman[249138]: 2026-01-29 09:36:14.942458068 +0000 UTC m=+0.209744426 container remove 0d70d6477b27bf91ac1e5fdfe57917c2e67596b50c219397322a39ff8f57d0fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:36:14 compute-0 systemd[1]: libpod-conmon-0d70d6477b27bf91ac1e5fdfe57917c2e67596b50c219397322a39ff8f57d0fc.scope: Deactivated successfully.
Jan 29 09:36:15 compute-0 podman[249200]: 2026-01-29 09:36:15.101802652 +0000 UTC m=+0.042372674 container create ee415fc46abd6d1470f12c61a4e92a5369ee5ad5bcdfd8b8fd0edc3d9e4573ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:36:15 compute-0 systemd[1]: Started libpod-conmon-ee415fc46abd6d1470f12c61a4e92a5369ee5ad5bcdfd8b8fd0edc3d9e4573ab.scope.
Jan 29 09:36:15 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:36:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b727e94ef861b564254c47fe74fa48c5bb153e0251ad51c3c4d24eb96ae0fecf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:36:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b727e94ef861b564254c47fe74fa48c5bb153e0251ad51c3c4d24eb96ae0fecf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:36:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b727e94ef861b564254c47fe74fa48c5bb153e0251ad51c3c4d24eb96ae0fecf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:36:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b727e94ef861b564254c47fe74fa48c5bb153e0251ad51c3c4d24eb96ae0fecf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:36:15 compute-0 podman[249200]: 2026-01-29 09:36:15.080741619 +0000 UTC m=+0.021311671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:36:15 compute-0 podman[249200]: 2026-01-29 09:36:15.195154741 +0000 UTC m=+0.135724793 container init ee415fc46abd6d1470f12c61a4e92a5369ee5ad5bcdfd8b8fd0edc3d9e4573ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Jan 29 09:36:15 compute-0 podman[249200]: 2026-01-29 09:36:15.202309765 +0000 UTC m=+0.142879797 container start ee415fc46abd6d1470f12c61a4e92a5369ee5ad5bcdfd8b8fd0edc3d9e4573ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_merkle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 29 09:36:15 compute-0 podman[249200]: 2026-01-29 09:36:15.210457047 +0000 UTC m=+0.151027079 container attach ee415fc46abd6d1470f12c61a4e92a5369ee5ad5bcdfd8b8fd0edc3d9e4573ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_merkle, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14932 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.756942845403104e-07 of space, bias 1.0, pg target 8.270828536209312e-05 quantized to 32 (current 32)
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.5000327611377854e-07 of space, bias 1.0, pg target 4.500098283413356e-05 quantized to 32 (current 32)
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683123502180493 of space, bias 1.0, pg target 0.2004937050654148 quantized to 32 (current 32)
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2051960589356773e-06 of space, bias 4.0, pg target 0.0014462352707228128 quantized to 16 (current 32)
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:36:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:36:15 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Jan 29 09:36:15 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340403760' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 29 09:36:15 compute-0 lvm[249320]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:36:15 compute-0 lvm[249319]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:36:15 compute-0 lvm[249319]: VG ceph_vg0 finished
Jan 29 09:36:15 compute-0 lvm[249320]: VG ceph_vg1 finished
Jan 29 09:36:15 compute-0 lvm[249324]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:36:15 compute-0 lvm[249324]: VG ceph_vg2 finished
Jan 29 09:36:15 compute-0 silly_merkle[249216]: {}
Jan 29 09:36:15 compute-0 systemd[1]: libpod-ee415fc46abd6d1470f12c61a4e92a5369ee5ad5bcdfd8b8fd0edc3d9e4573ab.scope: Deactivated successfully.
Jan 29 09:36:15 compute-0 podman[249200]: 2026-01-29 09:36:15.917587339 +0000 UTC m=+0.858157401 container died ee415fc46abd6d1470f12c61a4e92a5369ee5ad5bcdfd8b8fd0edc3d9e4573ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_merkle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:36:15 compute-0 systemd[1]: libpod-ee415fc46abd6d1470f12c61a4e92a5369ee5ad5bcdfd8b8fd0edc3d9e4573ab.scope: Consumed 1.055s CPU time.
Jan 29 09:36:15 compute-0 ceph-mon[75183]: from='client.14930 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:15 compute-0 ceph-mon[75183]: pgmap v813: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:15 compute-0 ceph-mon[75183]: from='client.14932 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:15 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2340403760' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 29 09:36:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b727e94ef861b564254c47fe74fa48c5bb153e0251ad51c3c4d24eb96ae0fecf-merged.mount: Deactivated successfully.
Jan 29 09:36:16 compute-0 podman[249200]: 2026-01-29 09:36:16.0397192 +0000 UTC m=+0.980289222 container remove ee415fc46abd6d1470f12c61a4e92a5369ee5ad5bcdfd8b8fd0edc3d9e4573ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_merkle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 09:36:16 compute-0 systemd[1]: libpod-conmon-ee415fc46abd6d1470f12c61a4e92a5369ee5ad5bcdfd8b8fd0edc3d9e4573ab.scope: Deactivated successfully.
Jan 29 09:36:16 compute-0 sudo[249081]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:36:16 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:36:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:36:16 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:36:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Jan 29 09:36:16 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3484364515' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Jan 29 09:36:16 compute-0 sudo[249360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:36:16 compute-0 sudo[249360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:36:16 compute-0 sudo[249360]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:16 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14938 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v814: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14940 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:17 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:36:17 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:36:17 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3484364515' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Jan 29 09:36:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 29 09:36:17 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816215393' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 09:36:17 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:36:18 compute-0 ceph-mon[75183]: from='client.14938 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:18 compute-0 ceph-mon[75183]: pgmap v814: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:18 compute-0 ceph-mon[75183]: from='client.14940 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:36:18 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3816215393' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 09:36:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v815: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Jan 29 09:36:18 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3862635913' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Jan 29 09:36:19 compute-0 ceph-mon[75183]: pgmap v815: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:19 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3862635913' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Jan 29 09:36:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v816: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:22 compute-0 ceph-mon[75183]: pgmap v816: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:22 compute-0 podman[249479]: 2026-01-29 09:36:22.224211686 +0000 UTC m=+0.156364034 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 29 09:36:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v817: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:22 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:36:24 compute-0 ceph-mon[75183]: pgmap v817: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v818: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:24 compute-0 podman[249506]: 2026-01-29 09:36:24.939231328 +0000 UTC m=+0.059603102 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 09:36:25 compute-0 ceph-mon[75183]: pgmap v818: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:36:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:36:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:36:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:36:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:36:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:36:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v819: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:36:28 compute-0 ceph-mon[75183]: pgmap v819: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v820: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:30 compute-0 ceph-mon[75183]: pgmap v820: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v821: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:32 compute-0 ceph-mon[75183]: pgmap v821: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v822: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:36:33 compute-0 ceph-mon[75183]: pgmap v822: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v823: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v824: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:36:38 compute-0 ceph-mon[75183]: pgmap v823: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v825: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v826: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:41 compute-0 ceph-mon[75183]: pgmap v824: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:42 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 29 09:36:42 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 29 09:36:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v827: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:36:43 compute-0 ceph-mon[75183]: pgmap v825: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:43 compute-0 ceph-mon[75183]: pgmap v826: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:43 compute-0 nova_compute[236255]: 2026-01-29 09:36:43.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:36:43 compute-0 nova_compute[236255]: 2026-01-29 09:36:43.557 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:36:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v828: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:44 compute-0 sudo[242173]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:44 compute-0 sshd-session[242172]: Received disconnect from 192.168.122.10 port 51804:11: disconnected by user
Jan 29 09:36:44 compute-0 sshd-session[242172]: Disconnected from user zuul 192.168.122.10 port 51804
Jan 29 09:36:44 compute-0 sshd-session[242169]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:36:44 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Jan 29 09:36:44 compute-0 systemd[1]: session-51.scope: Consumed 2min 32.165s CPU time, 670.7M memory peak, read 241.4M from disk, written 187.8M to disk.
Jan 29 09:36:44 compute-0 systemd-logind[799]: Session 51 logged out. Waiting for processes to exit.
Jan 29 09:36:44 compute-0 systemd-logind[799]: Removed session 51.
Jan 29 09:36:45 compute-0 sshd-session[249530]: Accepted publickey for zuul from 192.168.122.10 port 40234 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:36:45 compute-0 systemd-logind[799]: New session 52 of user zuul.
Jan 29 09:36:45 compute-0 systemd[1]: Started Session 52 of User zuul.
Jan 29 09:36:45 compute-0 sshd-session[249530]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:36:45 compute-0 sudo[249534]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2026-01-29-awpfyts.tar.xz
Jan 29 09:36:45 compute-0 sudo[249534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:36:45 compute-0 sudo[249534]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:45 compute-0 sshd-session[249533]: Received disconnect from 192.168.122.10 port 40234:11: disconnected by user
Jan 29 09:36:45 compute-0 sshd-session[249533]: Disconnected from user zuul 192.168.122.10 port 40234
Jan 29 09:36:45 compute-0 sshd-session[249530]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:36:45 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Jan 29 09:36:45 compute-0 systemd-logind[799]: Session 52 logged out. Waiting for processes to exit.
Jan 29 09:36:45 compute-0 systemd-logind[799]: Removed session 52.
Jan 29 09:36:45 compute-0 sshd-session[249559]: Accepted publickey for zuul from 192.168.122.10 port 40240 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:36:45 compute-0 systemd-logind[799]: New session 53 of user zuul.
Jan 29 09:36:45 compute-0 systemd[1]: Started Session 53 of User zuul.
Jan 29 09:36:45 compute-0 sshd-session[249559]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:36:45 compute-0 sudo[249563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Jan 29 09:36:45 compute-0 sudo[249563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:36:45 compute-0 sudo[249563]: pam_unix(sudo:session): session closed for user root
Jan 29 09:36:45 compute-0 sshd-session[249562]: Received disconnect from 192.168.122.10 port 40240:11: disconnected by user
Jan 29 09:36:45 compute-0 sshd-session[249562]: Disconnected from user zuul 192.168.122.10 port 40240
Jan 29 09:36:45 compute-0 sshd-session[249559]: pam_unix(sshd:session): session closed for user zuul
Jan 29 09:36:45 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Jan 29 09:36:45 compute-0 systemd-logind[799]: Session 53 logged out. Waiting for processes to exit.
Jan 29 09:36:45 compute-0 systemd-logind[799]: Removed session 53.
Jan 29 09:36:45 compute-0 ceph-mon[75183]: pgmap v827: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:45 compute-0 nova_compute[236255]: 2026-01-29 09:36:45.557 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:36:46 compute-0 nova_compute[236255]: 2026-01-29 09:36:46.551 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:36:46 compute-0 ceph-mon[75183]: pgmap v828: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v829: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:47 compute-0 nova_compute[236255]: 2026-01-29 09:36:47.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:36:47 compute-0 nova_compute[236255]: 2026-01-29 09:36:47.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:36:47 compute-0 nova_compute[236255]: 2026-01-29 09:36:47.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:36:47 compute-0 nova_compute[236255]: 2026-01-29 09:36:47.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:36:47 compute-0 nova_compute[236255]: 2026-01-29 09:36:47.589 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:36:47 compute-0 nova_compute[236255]: 2026-01-29 09:36:47.590 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:36:47 compute-0 nova_compute[236255]: 2026-01-29 09:36:47.590 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:36:47 compute-0 nova_compute[236255]: 2026-01-29 09:36:47.590 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:36:47 compute-0 nova_compute[236255]: 2026-01-29 09:36:47.590 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:36:47 compute-0 ceph-mon[75183]: pgmap v829: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:36:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:36:48 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/931022231' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:36:48 compute-0 nova_compute[236255]: 2026-01-29 09:36:48.153 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:36:48 compute-0 nova_compute[236255]: 2026-01-29 09:36:48.302 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:36:48 compute-0 nova_compute[236255]: 2026-01-29 09:36:48.304 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5099MB free_disk=59.98826471157372GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:36:48 compute-0 nova_compute[236255]: 2026-01-29 09:36:48.304 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:36:48 compute-0 nova_compute[236255]: 2026-01-29 09:36:48.304 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:36:48 compute-0 nova_compute[236255]: 2026-01-29 09:36:48.395 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:36:48 compute-0 nova_compute[236255]: 2026-01-29 09:36:48.396 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:36:48 compute-0 nova_compute[236255]: 2026-01-29 09:36:48.414 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:36:48 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/931022231' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:36:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v830: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:36:48 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638113726' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:36:48 compute-0 nova_compute[236255]: 2026-01-29 09:36:48.941 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:36:48 compute-0 nova_compute[236255]: 2026-01-29 09:36:48.946 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:36:48 compute-0 nova_compute[236255]: 2026-01-29 09:36:48.964 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:36:48 compute-0 nova_compute[236255]: 2026-01-29 09:36:48.965 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:36:48 compute-0 nova_compute[236255]: 2026-01-29 09:36:48.965 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:36:49 compute-0 ceph-mon[75183]: pgmap v830: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:49 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1638113726' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:36:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v831: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:50 compute-0 nova_compute[236255]: 2026-01-29 09:36:50.965 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:36:50 compute-0 nova_compute[236255]: 2026-01-29 09:36:50.965 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:36:50 compute-0 nova_compute[236255]: 2026-01-29 09:36:50.965 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:36:51 compute-0 nova_compute[236255]: 2026-01-29 09:36:51.020 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:36:51 compute-0 nova_compute[236255]: 2026-01-29 09:36:51.020 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:36:51 compute-0 nova_compute[236255]: 2026-01-29 09:36:51.021 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:36:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:36:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3420636874' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:36:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:36:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3420636874' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:36:51 compute-0 ceph-mon[75183]: pgmap v831: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:51 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3420636874' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:36:51 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3420636874' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:36:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v832: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:36:53 compute-0 podman[249632]: 2026-01-29 09:36:53.17289623 +0000 UTC m=+0.110048964 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 09:36:53 compute-0 ceph-mon[75183]: pgmap v832: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v833: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:55 compute-0 podman[249658]: 2026-01-29 09:36:55.109082848 +0000 UTC m=+0.048766461 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 09:36:55 compute-0 ceph-mon[75183]: pgmap v833: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:36:56
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.data', 'vms', 'images', 'cephfs.cephfs.meta', 'volumes']
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:36:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v834: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:57 compute-0 ceph-mon[75183]: pgmap v834: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:36:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:36:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v835: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:00 compute-0 ceph-mon[75183]: pgmap v835: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v836: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.756942845403104e-07 of space, bias 1.0, pg target 8.270828536209312e-05 quantized to 32 (current 32)
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.5000327611377854e-07 of space, bias 1.0, pg target 4.500098283413356e-05 quantized to 32 (current 32)
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683123502180493 of space, bias 1.0, pg target 0.2004937050654148 quantized to 32 (current 32)
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2051960589356773e-06 of space, bias 4.0, pg target 0.0014462352707228128 quantized to 16 (current 32)
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:37:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:37:02 compute-0 ceph-mon[75183]: pgmap v836: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v837: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:02 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:37:04 compute-0 ceph-mon[75183]: pgmap v837: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v838: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:06 compute-0 ceph-mon[75183]: pgmap v838: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v839: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:07 compute-0 ceph-mon[75183]: pgmap v839: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:07 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:37:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v840: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:37:09.040 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:37:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:37:09.041 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:37:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:37:09.041 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:37:10 compute-0 ceph-mon[75183]: pgmap v840: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v841: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:12 compute-0 ceph-mon[75183]: pgmap v841: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v842: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:37:14 compute-0 ceph-mon[75183]: pgmap v842: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v843: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:15 compute-0 ceph-mon[75183]: pgmap v843: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:16 compute-0 sudo[249678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:37:16 compute-0 sudo[249678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:37:16 compute-0 sudo[249678]: pam_unix(sudo:session): session closed for user root
Jan 29 09:37:16 compute-0 sudo[249703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:37:16 compute-0 sudo[249703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:37:16 compute-0 sudo[249703]: pam_unix(sudo:session): session closed for user root
Jan 29 09:37:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:37:16 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:37:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:37:16 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:37:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:37:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v844: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:16 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:37:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:37:16 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:37:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:37:16 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:37:16 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:37:16 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:37:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:37:16 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:37:16 compute-0 sudo[249758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:37:17 compute-0 sudo[249758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:37:17 compute-0 sudo[249758]: pam_unix(sudo:session): session closed for user root
Jan 29 09:37:17 compute-0 sudo[249783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:37:17 compute-0 sudo[249783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:37:17 compute-0 podman[249820]: 2026-01-29 09:37:17.292491665 +0000 UTC m=+0.018469468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:37:17 compute-0 podman[249820]: 2026-01-29 09:37:17.503658837 +0000 UTC m=+0.229636610 container create d24564d845349af9b7c737e526a5c2a99c03c711068761dd5af4d2c35eeac9d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_cori, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 09:37:17 compute-0 systemd[1]: Started libpod-conmon-d24564d845349af9b7c737e526a5c2a99c03c711068761dd5af4d2c35eeac9d1.scope.
Jan 29 09:37:17 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:37:17 compute-0 podman[249820]: 2026-01-29 09:37:17.783608049 +0000 UTC m=+0.509585882 container init d24564d845349af9b7c737e526a5c2a99c03c711068761dd5af4d2c35eeac9d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_cori, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 09:37:17 compute-0 podman[249820]: 2026-01-29 09:37:17.78984037 +0000 UTC m=+0.515818153 container start d24564d845349af9b7c737e526a5c2a99c03c711068761dd5af4d2c35eeac9d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_cori, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:37:17 compute-0 condescending_cori[249836]: 167 167
Jan 29 09:37:17 compute-0 systemd[1]: libpod-d24564d845349af9b7c737e526a5c2a99c03c711068761dd5af4d2c35eeac9d1.scope: Deactivated successfully.
Jan 29 09:37:17 compute-0 podman[249820]: 2026-01-29 09:37:17.834833766 +0000 UTC m=+0.560811559 container attach d24564d845349af9b7c737e526a5c2a99c03c711068761dd5af4d2c35eeac9d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_cori, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 09:37:17 compute-0 podman[249820]: 2026-01-29 09:37:17.83608877 +0000 UTC m=+0.562066553 container died d24564d845349af9b7c737e526a5c2a99c03c711068761dd5af4d2c35eeac9d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_cori, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d0c940e8b2e42ddd98387a923a2cb22a199488224fda5f7d0827dc872ed3daa-merged.mount: Deactivated successfully.
Jan 29 09:37:17 compute-0 ceph-mon[75183]: pgmap v844: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:17 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:37:17 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:37:17 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:37:17 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:37:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:37:18 compute-0 podman[249820]: 2026-01-29 09:37:18.031475409 +0000 UTC m=+0.757453192 container remove d24564d845349af9b7c737e526a5c2a99c03c711068761dd5af4d2c35eeac9d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:37:18 compute-0 systemd[1]: libpod-conmon-d24564d845349af9b7c737e526a5c2a99c03c711068761dd5af4d2c35eeac9d1.scope: Deactivated successfully.
Jan 29 09:37:18 compute-0 podman[249861]: 2026-01-29 09:37:18.140240477 +0000 UTC m=+0.019745453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:37:18 compute-0 podman[249861]: 2026-01-29 09:37:18.286824245 +0000 UTC m=+0.166329191 container create 86c9aa1bc6d87da257ab7649118ac910952bfd3e4b2f905e0456bdb2c98386d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_edison, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:37:18 compute-0 systemd[1]: Started libpod-conmon-86c9aa1bc6d87da257ab7649118ac910952bfd3e4b2f905e0456bdb2c98386d7.scope.
Jan 29 09:37:18 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cd5eb6253090e7fec3d0320711e92ae220da983355432b543772077d9c5336b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cd5eb6253090e7fec3d0320711e92ae220da983355432b543772077d9c5336b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cd5eb6253090e7fec3d0320711e92ae220da983355432b543772077d9c5336b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cd5eb6253090e7fec3d0320711e92ae220da983355432b543772077d9c5336b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cd5eb6253090e7fec3d0320711e92ae220da983355432b543772077d9c5336b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:37:18 compute-0 podman[249861]: 2026-01-29 09:37:18.625878351 +0000 UTC m=+0.505383327 container init 86c9aa1bc6d87da257ab7649118ac910952bfd3e4b2f905e0456bdb2c98386d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_edison, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:37:18 compute-0 podman[249861]: 2026-01-29 09:37:18.634411285 +0000 UTC m=+0.513916231 container start 86c9aa1bc6d87da257ab7649118ac910952bfd3e4b2f905e0456bdb2c98386d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_edison, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:37:18 compute-0 podman[249861]: 2026-01-29 09:37:18.695182015 +0000 UTC m=+0.574686961 container attach 86c9aa1bc6d87da257ab7649118ac910952bfd3e4b2f905e0456bdb2c98386d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_edison, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:37:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v845: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:19 compute-0 gracious_edison[249878]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:37:19 compute-0 gracious_edison[249878]: --> All data devices are unavailable
Jan 29 09:37:19 compute-0 systemd[1]: libpod-86c9aa1bc6d87da257ab7649118ac910952bfd3e4b2f905e0456bdb2c98386d7.scope: Deactivated successfully.
Jan 29 09:37:19 compute-0 podman[249861]: 2026-01-29 09:37:19.083156475 +0000 UTC m=+0.962661441 container died 86c9aa1bc6d87da257ab7649118ac910952bfd3e4b2f905e0456bdb2c98386d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 09:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cd5eb6253090e7fec3d0320711e92ae220da983355432b543772077d9c5336b-merged.mount: Deactivated successfully.
Jan 29 09:37:19 compute-0 podman[249861]: 2026-01-29 09:37:19.615763049 +0000 UTC m=+1.495268005 container remove 86c9aa1bc6d87da257ab7649118ac910952bfd3e4b2f905e0456bdb2c98386d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 09:37:19 compute-0 systemd[1]: libpod-conmon-86c9aa1bc6d87da257ab7649118ac910952bfd3e4b2f905e0456bdb2c98386d7.scope: Deactivated successfully.
Jan 29 09:37:19 compute-0 sudo[249783]: pam_unix(sudo:session): session closed for user root
Jan 29 09:37:19 compute-0 sudo[249913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:37:19 compute-0 sudo[249913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:37:19 compute-0 sudo[249913]: pam_unix(sudo:session): session closed for user root
Jan 29 09:37:19 compute-0 sudo[249938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:37:19 compute-0 sudo[249938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:37:20 compute-0 podman[249973]: 2026-01-29 09:37:20.018977106 +0000 UTC m=+0.031366072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:37:20 compute-0 ceph-mon[75183]: pgmap v845: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:20 compute-0 podman[249973]: 2026-01-29 09:37:20.122981394 +0000 UTC m=+0.135370340 container create ff862e312d7e666f63aad295574c83c5e5789619cdcfd600a585482ba5e51919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 09:37:20 compute-0 systemd[1]: Started libpod-conmon-ff862e312d7e666f63aad295574c83c5e5789619cdcfd600a585482ba5e51919.scope.
Jan 29 09:37:20 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:37:20 compute-0 podman[249973]: 2026-01-29 09:37:20.259465894 +0000 UTC m=+0.271854870 container init ff862e312d7e666f63aad295574c83c5e5789619cdcfd600a585482ba5e51919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:37:20 compute-0 podman[249973]: 2026-01-29 09:37:20.264616046 +0000 UTC m=+0.277004992 container start ff862e312d7e666f63aad295574c83c5e5789619cdcfd600a585482ba5e51919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:37:20 compute-0 naughty_northcutt[249989]: 167 167
Jan 29 09:37:20 compute-0 systemd[1]: libpod-ff862e312d7e666f63aad295574c83c5e5789619cdcfd600a585482ba5e51919.scope: Deactivated successfully.
Jan 29 09:37:20 compute-0 podman[249973]: 2026-01-29 09:37:20.298605789 +0000 UTC m=+0.310994735 container attach ff862e312d7e666f63aad295574c83c5e5789619cdcfd600a585482ba5e51919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_northcutt, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 09:37:20 compute-0 podman[249973]: 2026-01-29 09:37:20.298958709 +0000 UTC m=+0.311347655 container died ff862e312d7e666f63aad295574c83c5e5789619cdcfd600a585482ba5e51919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_northcutt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:37:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-3403a1f0b1b11801e0f112c146ee74fa3cb520e0512aa75a3a195a2ac3b0179f-merged.mount: Deactivated successfully.
Jan 29 09:37:20 compute-0 podman[249973]: 2026-01-29 09:37:20.587436536 +0000 UTC m=+0.599825472 container remove ff862e312d7e666f63aad295574c83c5e5789619cdcfd600a585482ba5e51919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:37:20 compute-0 systemd[1]: libpod-conmon-ff862e312d7e666f63aad295574c83c5e5789619cdcfd600a585482ba5e51919.scope: Deactivated successfully.
Jan 29 09:37:20 compute-0 podman[250013]: 2026-01-29 09:37:20.741529559 +0000 UTC m=+0.059203327 container create 6c28038ccc8c1a74fee4595bcf94e11b663530d232dacfcc28c891fbd82dbb63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 09:37:20 compute-0 systemd[1]: Started libpod-conmon-6c28038ccc8c1a74fee4595bcf94e11b663530d232dacfcc28c891fbd82dbb63.scope.
Jan 29 09:37:20 compute-0 podman[250013]: 2026-01-29 09:37:20.7033415 +0000 UTC m=+0.021015248 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:37:20 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d9759864049d0a8ca357c673fd6cb9b0e4e5c78b8f958e99cd979f6cb4bac5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d9759864049d0a8ca357c673fd6cb9b0e4e5c78b8f958e99cd979f6cb4bac5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d9759864049d0a8ca357c673fd6cb9b0e4e5c78b8f958e99cd979f6cb4bac5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d9759864049d0a8ca357c673fd6cb9b0e4e5c78b8f958e99cd979f6cb4bac5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:37:20 compute-0 podman[250013]: 2026-01-29 09:37:20.83727695 +0000 UTC m=+0.154950698 container init 6c28038ccc8c1a74fee4595bcf94e11b663530d232dacfcc28c891fbd82dbb63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mcclintock, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 29 09:37:20 compute-0 podman[250013]: 2026-01-29 09:37:20.84271241 +0000 UTC m=+0.160386138 container start 6c28038ccc8c1a74fee4595bcf94e11b663530d232dacfcc28c891fbd82dbb63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mcclintock, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:37:20 compute-0 podman[250013]: 2026-01-29 09:37:20.85253486 +0000 UTC m=+0.170208618 container attach 6c28038ccc8c1a74fee4595bcf94e11b663530d232dacfcc28c891fbd82dbb63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:37:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v846: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]: {
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:     "0": [
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:         {
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "devices": [
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "/dev/loop3"
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             ],
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_name": "ceph_lv0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_size": "21470642176",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "name": "ceph_lv0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "tags": {
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.cluster_name": "ceph",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.crush_device_class": "",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.encrypted": "0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.objectstore": "bluestore",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.osd_id": "0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.type": "block",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.vdo": "0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.with_tpm": "0"
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             },
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "type": "block",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "vg_name": "ceph_vg0"
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:         }
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:     ],
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:     "1": [
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:         {
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "devices": [
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "/dev/loop4"
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             ],
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_name": "ceph_lv1",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_size": "21470642176",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "name": "ceph_lv1",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "tags": {
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.cluster_name": "ceph",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.crush_device_class": "",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.encrypted": "0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.objectstore": "bluestore",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.osd_id": "1",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.type": "block",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.vdo": "0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.with_tpm": "0"
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             },
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "type": "block",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "vg_name": "ceph_vg1"
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:         }
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:     ],
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:     "2": [
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:         {
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "devices": [
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "/dev/loop5"
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             ],
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_name": "ceph_lv2",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_size": "21470642176",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "name": "ceph_lv2",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "tags": {
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.cluster_name": "ceph",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.crush_device_class": "",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.encrypted": "0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.objectstore": "bluestore",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.osd_id": "2",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.type": "block",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.vdo": "0",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:                 "ceph.with_tpm": "0"
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             },
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "type": "block",
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:             "vg_name": "ceph_vg2"
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:         }
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]:     ]
Jan 29 09:37:21 compute-0 sharp_mcclintock[250029]: }
Jan 29 09:37:21 compute-0 systemd[1]: libpod-6c28038ccc8c1a74fee4595bcf94e11b663530d232dacfcc28c891fbd82dbb63.scope: Deactivated successfully.
Jan 29 09:37:21 compute-0 podman[250013]: 2026-01-29 09:37:21.123845914 +0000 UTC m=+0.441519672 container died 6c28038ccc8c1a74fee4595bcf94e11b663530d232dacfcc28c891fbd82dbb63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:37:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-64d9759864049d0a8ca357c673fd6cb9b0e4e5c78b8f958e99cd979f6cb4bac5-merged.mount: Deactivated successfully.
Jan 29 09:37:21 compute-0 podman[250013]: 2026-01-29 09:37:21.337647399 +0000 UTC m=+0.655321137 container remove 6c28038ccc8c1a74fee4595bcf94e11b663530d232dacfcc28c891fbd82dbb63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:37:21 compute-0 systemd[1]: libpod-conmon-6c28038ccc8c1a74fee4595bcf94e11b663530d232dacfcc28c891fbd82dbb63.scope: Deactivated successfully.
Jan 29 09:37:21 compute-0 sudo[249938]: pam_unix(sudo:session): session closed for user root
Jan 29 09:37:21 compute-0 sudo[250051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:37:21 compute-0 sudo[250051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:37:21 compute-0 sudo[250051]: pam_unix(sudo:session): session closed for user root
Jan 29 09:37:21 compute-0 sudo[250076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:37:21 compute-0 sudo[250076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:37:21 compute-0 podman[250113]: 2026-01-29 09:37:21.796094405 +0000 UTC m=+0.104741149 container create c53b1e7e0c8faadf236c3524267973316f84a08fb8647849a39a836c080e004e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_elgamal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 09:37:21 compute-0 podman[250113]: 2026-01-29 09:37:21.718291377 +0000 UTC m=+0.026938141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:37:21 compute-0 systemd[1]: Started libpod-conmon-c53b1e7e0c8faadf236c3524267973316f84a08fb8647849a39a836c080e004e.scope.
Jan 29 09:37:21 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:37:22 compute-0 podman[250113]: 2026-01-29 09:37:22.013626822 +0000 UTC m=+0.322273596 container init c53b1e7e0c8faadf236c3524267973316f84a08fb8647849a39a836c080e004e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 09:37:22 compute-0 podman[250113]: 2026-01-29 09:37:22.018723982 +0000 UTC m=+0.327370726 container start c53b1e7e0c8faadf236c3524267973316f84a08fb8647849a39a836c080e004e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:37:22 compute-0 compassionate_elgamal[250129]: 167 167
Jan 29 09:37:22 compute-0 systemd[1]: libpod-c53b1e7e0c8faadf236c3524267973316f84a08fb8647849a39a836c080e004e.scope: Deactivated successfully.
Jan 29 09:37:22 compute-0 podman[250113]: 2026-01-29 09:37:22.051533064 +0000 UTC m=+0.360179888 container attach c53b1e7e0c8faadf236c3524267973316f84a08fb8647849a39a836c080e004e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_elgamal, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:37:22 compute-0 podman[250113]: 2026-01-29 09:37:22.052046638 +0000 UTC m=+0.360693382 container died c53b1e7e0c8faadf236c3524267973316f84a08fb8647849a39a836c080e004e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 09:37:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-aec7d6ed7734350729d66c3487c49bc8ec1441b3dde134538b34fb9e3e044066-merged.mount: Deactivated successfully.
Jan 29 09:37:22 compute-0 ceph-mon[75183]: pgmap v846: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:22 compute-0 podman[250113]: 2026-01-29 09:37:22.202638045 +0000 UTC m=+0.511284789 container remove c53b1e7e0c8faadf236c3524267973316f84a08fb8647849a39a836c080e004e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_elgamal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:37:22 compute-0 systemd[1]: libpod-conmon-c53b1e7e0c8faadf236c3524267973316f84a08fb8647849a39a836c080e004e.scope: Deactivated successfully.
Jan 29 09:37:22 compute-0 podman[250153]: 2026-01-29 09:37:22.365039097 +0000 UTC m=+0.049281925 container create aec5716be520532c188b34459732a5918ee4f25367c518602f16871715189b69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:37:22 compute-0 systemd[1]: Started libpod-conmon-aec5716be520532c188b34459732a5918ee4f25367c518602f16871715189b69.scope.
Jan 29 09:37:22 compute-0 podman[250153]: 2026-01-29 09:37:22.340714209 +0000 UTC m=+0.024957067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:37:22 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675563d4b4ee7fd5406c560bd722d378c77883dcd282619f51cd0b812789cc03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675563d4b4ee7fd5406c560bd722d378c77883dcd282619f51cd0b812789cc03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675563d4b4ee7fd5406c560bd722d378c77883dcd282619f51cd0b812789cc03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/675563d4b4ee7fd5406c560bd722d378c77883dcd282619f51cd0b812789cc03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:37:22 compute-0 podman[250153]: 2026-01-29 09:37:22.46739711 +0000 UTC m=+0.151639978 container init aec5716be520532c188b34459732a5918ee4f25367c518602f16871715189b69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:37:22 compute-0 podman[250153]: 2026-01-29 09:37:22.473900108 +0000 UTC m=+0.158142946 container start aec5716be520532c188b34459732a5918ee4f25367c518602f16871715189b69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chebyshev, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 09:37:22 compute-0 podman[250153]: 2026-01-29 09:37:22.481075415 +0000 UTC m=+0.165318283 container attach aec5716be520532c188b34459732a5918ee4f25367c518602f16871715189b69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chebyshev, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:37:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v847: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:37:23 compute-0 lvm[250249]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:37:23 compute-0 lvm[250248]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:37:23 compute-0 lvm[250249]: VG ceph_vg1 finished
Jan 29 09:37:23 compute-0 lvm[250248]: VG ceph_vg0 finished
Jan 29 09:37:23 compute-0 lvm[250251]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:37:23 compute-0 lvm[250251]: VG ceph_vg2 finished
Jan 29 09:37:23 compute-0 gifted_chebyshev[250170]: {}
Jan 29 09:37:23 compute-0 podman[250253]: 2026-01-29 09:37:23.336035865 +0000 UTC m=+0.086132728 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 09:37:23 compute-0 systemd[1]: libpod-aec5716be520532c188b34459732a5918ee4f25367c518602f16871715189b69.scope: Deactivated successfully.
Jan 29 09:37:23 compute-0 systemd[1]: libpod-aec5716be520532c188b34459732a5918ee4f25367c518602f16871715189b69.scope: Consumed 1.344s CPU time.
Jan 29 09:37:23 compute-0 podman[250153]: 2026-01-29 09:37:23.371329945 +0000 UTC m=+1.055572803 container died aec5716be520532c188b34459732a5918ee4f25367c518602f16871715189b69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 09:37:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-675563d4b4ee7fd5406c560bd722d378c77883dcd282619f51cd0b812789cc03-merged.mount: Deactivated successfully.
Jan 29 09:37:23 compute-0 podman[250153]: 2026-01-29 09:37:23.65106414 +0000 UTC m=+1.335306978 container remove aec5716be520532c188b34459732a5918ee4f25367c518602f16871715189b69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:37:23 compute-0 systemd[1]: libpod-conmon-aec5716be520532c188b34459732a5918ee4f25367c518602f16871715189b69.scope: Deactivated successfully.
Jan 29 09:37:23 compute-0 sudo[250076]: pam_unix(sudo:session): session closed for user root
Jan 29 09:37:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:37:23 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:37:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:37:23 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:37:23 compute-0 sudo[250297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:37:23 compute-0 sudo[250297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:37:23 compute-0 sudo[250297]: pam_unix(sudo:session): session closed for user root
Jan 29 09:37:24 compute-0 ceph-mon[75183]: pgmap v847: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:24 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:37:24 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:37:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v848: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:26 compute-0 podman[250322]: 2026-01-29 09:37:26.108057708 +0000 UTC m=+0.051383212 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 29 09:37:26 compute-0 ceph-mon[75183]: pgmap v848: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:37:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:37:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:37:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:37:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:37:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:37:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v849: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:27 compute-0 ceph-mon[75183]: pgmap v849: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:37:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v850: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:29 compute-0 ceph-mon[75183]: pgmap v850: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v851: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:32 compute-0 ceph-mon[75183]: pgmap v851: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v852: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:37:34 compute-0 ceph-mon[75183]: pgmap v852: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v853: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:36 compute-0 ceph-mon[75183]: pgmap v853: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v854: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:37:38 compute-0 ceph-mon[75183]: pgmap v854: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v855: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:40 compute-0 ceph-mon[75183]: pgmap v855: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v856: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:41 compute-0 ceph-mon[75183]: pgmap v856: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v857: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:42 compute-0 sshd-session[250342]: Received disconnect from 43.166.3.199 port 57160:11:  [preauth]
Jan 29 09:37:42 compute-0 sshd-session[250342]: Disconnected from authenticating user root 43.166.3.199 port 57160 [preauth]
Jan 29 09:37:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:37:44 compute-0 ceph-mon[75183]: pgmap v857: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:44 compute-0 nova_compute[236255]: 2026-01-29 09:37:44.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:37:44 compute-0 nova_compute[236255]: 2026-01-29 09:37:44.557 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:37:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v858: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:45 compute-0 nova_compute[236255]: 2026-01-29 09:37:45.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:37:46 compute-0 ceph-mon[75183]: pgmap v858: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v859: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:47 compute-0 nova_compute[236255]: 2026-01-29 09:37:47.551 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:37:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.011939) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679468011976, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2035, "num_deletes": 255, "total_data_size": 2185250, "memory_usage": 2227704, "flush_reason": "Manual Compaction"}
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679468021460, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1339128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15874, "largest_seqno": 17908, "table_properties": {"data_size": 1332168, "index_size": 3586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19200, "raw_average_key_size": 21, "raw_value_size": 1316149, "raw_average_value_size": 1447, "num_data_blocks": 164, "num_entries": 909, "num_filter_entries": 909, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769679265, "oldest_key_time": 1769679265, "file_creation_time": 1769679468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 9576 microseconds, and 3295 cpu microseconds.
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.021511) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1339128 bytes OK
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.021531) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.023355) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.023368) EVENT_LOG_v1 {"time_micros": 1769679468023364, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.023391) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2176354, prev total WAL file size 2176354, number of live WAL files 2.
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.023883) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373538' seq:0, type:0; will stop at (end)
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1307KB)], [38(5690KB)]
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679468023973, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 7165706, "oldest_snapshot_seqno": -1}
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4097 keys, 5809206 bytes, temperature: kUnknown
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679468080380, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 5809206, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5780702, "index_size": 17130, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 96705, "raw_average_key_size": 23, "raw_value_size": 5706124, "raw_average_value_size": 1392, "num_data_blocks": 739, "num_entries": 4097, "num_filter_entries": 4097, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677896, "oldest_key_time": 0, "file_creation_time": 1769679468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.080698) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 5809206 bytes
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.083498) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.9 rd, 102.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 5.6 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(9.7) write-amplify(4.3) OK, records in: 4530, records dropped: 433 output_compression: NoCompression
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.083527) EVENT_LOG_v1 {"time_micros": 1769679468083512, "job": 18, "event": "compaction_finished", "compaction_time_micros": 56481, "compaction_time_cpu_micros": 19294, "output_level": 6, "num_output_files": 1, "total_output_size": 5809206, "num_input_records": 4530, "num_output_records": 4097, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679468083840, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679468084650, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.023782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.084807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.084812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.084814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.084816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:37:48 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:37:48.084818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:37:48 compute-0 ceph-mon[75183]: pgmap v859: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:48 compute-0 nova_compute[236255]: 2026-01-29 09:37:48.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:37:48 compute-0 nova_compute[236255]: 2026-01-29 09:37:48.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:37:48 compute-0 nova_compute[236255]: 2026-01-29 09:37:48.592 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:37:48 compute-0 nova_compute[236255]: 2026-01-29 09:37:48.592 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:37:48 compute-0 nova_compute[236255]: 2026-01-29 09:37:48.592 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:37:48 compute-0 nova_compute[236255]: 2026-01-29 09:37:48.592 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:37:48 compute-0 nova_compute[236255]: 2026-01-29 09:37:48.593 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:37:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v860: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:37:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2360679678' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:37:49 compute-0 nova_compute[236255]: 2026-01-29 09:37:49.111 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:37:49 compute-0 nova_compute[236255]: 2026-01-29 09:37:49.317 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:37:49 compute-0 nova_compute[236255]: 2026-01-29 09:37:49.319 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5080MB free_disk=59.98826471157372GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:37:49 compute-0 nova_compute[236255]: 2026-01-29 09:37:49.319 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:37:49 compute-0 nova_compute[236255]: 2026-01-29 09:37:49.319 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:37:49 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2360679678' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:37:49 compute-0 nova_compute[236255]: 2026-01-29 09:37:49.401 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:37:49 compute-0 nova_compute[236255]: 2026-01-29 09:37:49.402 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:37:49 compute-0 nova_compute[236255]: 2026-01-29 09:37:49.419 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:37:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:37:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4216084861' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:37:49 compute-0 nova_compute[236255]: 2026-01-29 09:37:49.942 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:37:49 compute-0 nova_compute[236255]: 2026-01-29 09:37:49.947 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:37:49 compute-0 nova_compute[236255]: 2026-01-29 09:37:49.969 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:37:49 compute-0 nova_compute[236255]: 2026-01-29 09:37:49.970 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:37:49 compute-0 nova_compute[236255]: 2026-01-29 09:37:49.971 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:37:50 compute-0 ceph-mon[75183]: pgmap v860: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:50 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4216084861' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:37:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v861: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:50 compute-0 nova_compute[236255]: 2026-01-29 09:37:50.970 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:37:50 compute-0 nova_compute[236255]: 2026-01-29 09:37:50.971 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:37:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:37:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2712212079' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:37:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:37:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2712212079' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:37:51 compute-0 nova_compute[236255]: 2026-01-29 09:37:51.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:37:51 compute-0 nova_compute[236255]: 2026-01-29 09:37:51.556 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:37:51 compute-0 nova_compute[236255]: 2026-01-29 09:37:51.556 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:37:51 compute-0 nova_compute[236255]: 2026-01-29 09:37:51.571 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:37:51 compute-0 nova_compute[236255]: 2026-01-29 09:37:51.571 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:37:52 compute-0 ceph-mon[75183]: pgmap v861: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/2712212079' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:37:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/2712212079' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:37:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v862: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:37:54 compute-0 podman[250388]: 2026-01-29 09:37:54.169821064 +0000 UTC m=+0.107444253 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:37:54 compute-0 ceph-mon[75183]: pgmap v862: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v863: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:37:56
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'backups', 'images', '.mgr', 'volumes', 'cephfs.cephfs.meta']
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:37:56 compute-0 ceph-mon[75183]: pgmap v863: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:37:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v864: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:57 compute-0 podman[250414]: 2026-01-29 09:37:57.13192518 +0000 UTC m=+0.074280322 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 29 09:37:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:37:58 compute-0 ceph-mon[75183]: pgmap v864: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:37:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v865: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:00 compute-0 ceph-mon[75183]: pgmap v865: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v866: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.756942845403104e-07 of space, bias 1.0, pg target 8.270828536209312e-05 quantized to 32 (current 32)
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.5000327611377854e-07 of space, bias 1.0, pg target 4.500098283413356e-05 quantized to 32 (current 32)
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683123502180493 of space, bias 1.0, pg target 0.2004937050654148 quantized to 32 (current 32)
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2051960589356773e-06 of space, bias 4.0, pg target 0.0014462352707228128 quantized to 16 (current 32)
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:38:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:38:02 compute-0 ceph-mon[75183]: pgmap v866: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v867: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.447264) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679483447298, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 378, "num_deletes": 251, "total_data_size": 163259, "memory_usage": 170224, "flush_reason": "Manual Compaction"}
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679483450532, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 161095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17909, "largest_seqno": 18286, "table_properties": {"data_size": 158834, "index_size": 425, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5661, "raw_average_key_size": 18, "raw_value_size": 154342, "raw_average_value_size": 506, "num_data_blocks": 20, "num_entries": 305, "num_filter_entries": 305, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769679468, "oldest_key_time": 1769679468, "file_creation_time": 1769679483, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 3331 microseconds, and 1056 cpu microseconds.
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.450591) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 161095 bytes OK
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.450612) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.452750) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.452768) EVENT_LOG_v1 {"time_micros": 1769679483452762, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.452790) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 160794, prev total WAL file size 160794, number of live WAL files 2.
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.453166) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(157KB)], [41(5673KB)]
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679483453190, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 5970301, "oldest_snapshot_seqno": -1}
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 3893 keys, 4793859 bytes, temperature: kUnknown
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679483475656, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 4793859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4768212, "index_size": 14786, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 93147, "raw_average_key_size": 23, "raw_value_size": 4698641, "raw_average_value_size": 1206, "num_data_blocks": 632, "num_entries": 3893, "num_filter_entries": 3893, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677896, "oldest_key_time": 0, "file_creation_time": 1769679483, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.475890) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 4793859 bytes
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.477184) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 264.7 rd, 212.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 5.5 +0.0 blob) out(4.6 +0.0 blob), read-write-amplify(66.8) write-amplify(29.8) OK, records in: 4402, records dropped: 509 output_compression: NoCompression
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.477203) EVENT_LOG_v1 {"time_micros": 1769679483477194, "job": 20, "event": "compaction_finished", "compaction_time_micros": 22558, "compaction_time_cpu_micros": 9116, "output_level": 6, "num_output_files": 1, "total_output_size": 4793859, "num_input_records": 4402, "num_output_records": 3893, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679483477338, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679483478078, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.453072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.478161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.478165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.478167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.478169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:38:03 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:38:03.478171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:38:04 compute-0 ceph-mon[75183]: pgmap v867: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v868: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:06 compute-0 ceph-mon[75183]: pgmap v868: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v869: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:38:08 compute-0 ceph-mon[75183]: pgmap v869: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v870: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:38:09.042 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:38:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:38:09.042 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:38:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:38:09.043 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:38:10 compute-0 ceph-mon[75183]: pgmap v870: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v871: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:12 compute-0 ceph-mon[75183]: pgmap v871: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v872: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:38:13 compute-0 ceph-mon[75183]: pgmap v872: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v873: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:15 compute-0 ceph-mon[75183]: pgmap v873: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v874: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:17 compute-0 ceph-mon[75183]: pgmap v874: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:38:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v875: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:19 compute-0 ceph-mon[75183]: pgmap v875: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v876: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:21 compute-0 ceph-mon[75183]: pgmap v876: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v877: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:38:23 compute-0 sudo[250433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:38:23 compute-0 sudo[250433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:38:23 compute-0 sudo[250433]: pam_unix(sudo:session): session closed for user root
Jan 29 09:38:23 compute-0 sudo[250458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:38:23 compute-0 sudo[250458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:38:24 compute-0 ceph-mon[75183]: pgmap v877: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:24 compute-0 sudo[250458]: pam_unix(sudo:session): session closed for user root
Jan 29 09:38:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:38:24 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:38:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:38:24 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:38:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:38:24 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:38:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:38:24 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:38:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:38:24 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:38:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:38:24 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:38:24 compute-0 sudo[250514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:38:24 compute-0 sudo[250514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:38:24 compute-0 sudo[250514]: pam_unix(sudo:session): session closed for user root
Jan 29 09:38:24 compute-0 sudo[250545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:38:24 compute-0 sudo[250545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:38:24 compute-0 podman[250538]: 2026-01-29 09:38:24.601909577 +0000 UTC m=+0.067243749 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 29 09:38:24 compute-0 podman[250604]: 2026-01-29 09:38:24.852395359 +0000 UTC m=+0.042314593 container create 7b83b58bfccf997ccfa3a047cf724c76f8e5ead319bf732678b978f241af331c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:38:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v878: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:24 compute-0 systemd[1]: Started libpod-conmon-7b83b58bfccf997ccfa3a047cf724c76f8e5ead319bf732678b978f241af331c.scope.
Jan 29 09:38:24 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:38:24 compute-0 podman[250604]: 2026-01-29 09:38:24.923742979 +0000 UTC m=+0.113662273 container init 7b83b58bfccf997ccfa3a047cf724c76f8e5ead319bf732678b978f241af331c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:38:24 compute-0 podman[250604]: 2026-01-29 09:38:24.930370452 +0000 UTC m=+0.120289686 container start 7b83b58bfccf997ccfa3a047cf724c76f8e5ead319bf732678b978f241af331c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 09:38:24 compute-0 podman[250604]: 2026-01-29 09:38:24.83493669 +0000 UTC m=+0.024855954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:38:24 compute-0 podman[250604]: 2026-01-29 09:38:24.934063843 +0000 UTC m=+0.123983137 container attach 7b83b58bfccf997ccfa3a047cf724c76f8e5ead319bf732678b978f241af331c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_murdock, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 29 09:38:24 compute-0 ecstatic_murdock[250620]: 167 167
Jan 29 09:38:24 compute-0 systemd[1]: libpod-7b83b58bfccf997ccfa3a047cf724c76f8e5ead319bf732678b978f241af331c.scope: Deactivated successfully.
Jan 29 09:38:24 compute-0 conmon[250620]: conmon 7b83b58bfccf997ccfa3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b83b58bfccf997ccfa3a047cf724c76f8e5ead319bf732678b978f241af331c.scope/container/memory.events
Jan 29 09:38:24 compute-0 podman[250604]: 2026-01-29 09:38:24.936070308 +0000 UTC m=+0.125989532 container died 7b83b58bfccf997ccfa3a047cf724c76f8e5ead319bf732678b978f241af331c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 29 09:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ad99efabd45f5cf28b74cbd2beefded41d408eacb4854efce5d5de91c39b43b-merged.mount: Deactivated successfully.
Jan 29 09:38:24 compute-0 podman[250604]: 2026-01-29 09:38:24.976390446 +0000 UTC m=+0.166309680 container remove 7b83b58bfccf997ccfa3a047cf724c76f8e5ead319bf732678b978f241af331c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:38:24 compute-0 systemd[1]: libpod-conmon-7b83b58bfccf997ccfa3a047cf724c76f8e5ead319bf732678b978f241af331c.scope: Deactivated successfully.
Jan 29 09:38:25 compute-0 podman[250643]: 2026-01-29 09:38:25.100070584 +0000 UTC m=+0.035603579 container create 01d0aa0bd5cc860ab29415a0c296b6458372488a722e5f01a297e22ff2485c9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:38:25 compute-0 systemd[1]: Started libpod-conmon-01d0aa0bd5cc860ab29415a0c296b6458372488a722e5f01a297e22ff2485c9a.scope.
Jan 29 09:38:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:38:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:38:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:38:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:38:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:38:25 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:38:25 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85de87c28251c0ab035a8a984c30eb1d3f54ec6b17b83dc597c0b1190941efbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85de87c28251c0ab035a8a984c30eb1d3f54ec6b17b83dc597c0b1190941efbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85de87c28251c0ab035a8a984c30eb1d3f54ec6b17b83dc597c0b1190941efbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85de87c28251c0ab035a8a984c30eb1d3f54ec6b17b83dc597c0b1190941efbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85de87c28251c0ab035a8a984c30eb1d3f54ec6b17b83dc597c0b1190941efbc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:38:25 compute-0 podman[250643]: 2026-01-29 09:38:25.084414424 +0000 UTC m=+0.019947439 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:38:25 compute-0 podman[250643]: 2026-01-29 09:38:25.190712345 +0000 UTC m=+0.126245370 container init 01d0aa0bd5cc860ab29415a0c296b6458372488a722e5f01a297e22ff2485c9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_liskov, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:38:25 compute-0 podman[250643]: 2026-01-29 09:38:25.197022688 +0000 UTC m=+0.132555673 container start 01d0aa0bd5cc860ab29415a0c296b6458372488a722e5f01a297e22ff2485c9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:38:25 compute-0 podman[250643]: 2026-01-29 09:38:25.199974689 +0000 UTC m=+0.135507684 container attach 01d0aa0bd5cc860ab29415a0c296b6458372488a722e5f01a297e22ff2485c9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Jan 29 09:38:25 compute-0 angry_liskov[250660]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:38:25 compute-0 angry_liskov[250660]: --> All data devices are unavailable
Jan 29 09:38:25 compute-0 systemd[1]: libpod-01d0aa0bd5cc860ab29415a0c296b6458372488a722e5f01a297e22ff2485c9a.scope: Deactivated successfully.
Jan 29 09:38:25 compute-0 podman[250643]: 2026-01-29 09:38:25.635990449 +0000 UTC m=+0.571523444 container died 01d0aa0bd5cc860ab29415a0c296b6458372488a722e5f01a297e22ff2485c9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Jan 29 09:38:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-85de87c28251c0ab035a8a984c30eb1d3f54ec6b17b83dc597c0b1190941efbc-merged.mount: Deactivated successfully.
Jan 29 09:38:25 compute-0 podman[250643]: 2026-01-29 09:38:25.679223577 +0000 UTC m=+0.614756582 container remove 01d0aa0bd5cc860ab29415a0c296b6458372488a722e5f01a297e22ff2485c9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_liskov, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 09:38:25 compute-0 systemd[1]: libpod-conmon-01d0aa0bd5cc860ab29415a0c296b6458372488a722e5f01a297e22ff2485c9a.scope: Deactivated successfully.
Jan 29 09:38:25 compute-0 sudo[250545]: pam_unix(sudo:session): session closed for user root
Jan 29 09:38:25 compute-0 sudo[250692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:38:25 compute-0 sudo[250692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:38:25 compute-0 sudo[250692]: pam_unix(sudo:session): session closed for user root
Jan 29 09:38:25 compute-0 sudo[250717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:38:25 compute-0 sudo[250717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:38:26 compute-0 podman[250755]: 2026-01-29 09:38:26.119992757 +0000 UTC m=+0.041895462 container create d6560bbfc8ffbc74aea830d122d03810a748d6bc5e0b59e6349073d98e568fd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:38:26 compute-0 systemd[1]: Started libpod-conmon-d6560bbfc8ffbc74aea830d122d03810a748d6bc5e0b59e6349073d98e568fd2.scope.
Jan 29 09:38:26 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:38:26 compute-0 podman[250755]: 2026-01-29 09:38:26.1918091 +0000 UTC m=+0.113711805 container init d6560bbfc8ffbc74aea830d122d03810a748d6bc5e0b59e6349073d98e568fd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:38:26 compute-0 podman[250755]: 2026-01-29 09:38:26.100304516 +0000 UTC m=+0.022207221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:38:26 compute-0 podman[250755]: 2026-01-29 09:38:26.197721483 +0000 UTC m=+0.119624168 container start d6560bbfc8ffbc74aea830d122d03810a748d6bc5e0b59e6349073d98e568fd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wilson, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:38:26 compute-0 podman[250755]: 2026-01-29 09:38:26.201644941 +0000 UTC m=+0.123547646 container attach d6560bbfc8ffbc74aea830d122d03810a748d6bc5e0b59e6349073d98e568fd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wilson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 09:38:26 compute-0 nostalgic_wilson[250771]: 167 167
Jan 29 09:38:26 compute-0 systemd[1]: libpod-d6560bbfc8ffbc74aea830d122d03810a748d6bc5e0b59e6349073d98e568fd2.scope: Deactivated successfully.
Jan 29 09:38:26 compute-0 podman[250755]: 2026-01-29 09:38:26.205671531 +0000 UTC m=+0.127574276 container died d6560bbfc8ffbc74aea830d122d03810a748d6bc5e0b59e6349073d98e568fd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wilson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 09:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fabb007b9361ff8c832ae5af7331b423bcf1b8f33a2bcb5b02b822099a0ca3b-merged.mount: Deactivated successfully.
Jan 29 09:38:26 compute-0 podman[250755]: 2026-01-29 09:38:26.242464112 +0000 UTC m=+0.164366797 container remove d6560bbfc8ffbc74aea830d122d03810a748d6bc5e0b59e6349073d98e568fd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 09:38:26 compute-0 systemd[1]: libpod-conmon-d6560bbfc8ffbc74aea830d122d03810a748d6bc5e0b59e6349073d98e568fd2.scope: Deactivated successfully.
Jan 29 09:38:26 compute-0 podman[250794]: 2026-01-29 09:38:26.406645103 +0000 UTC m=+0.042399596 container create a966fa9b5d46f5150a4ec57acdf7e45caf7a8d07bfe33791f5eafae3599245e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 09:38:26 compute-0 systemd[1]: Started libpod-conmon-a966fa9b5d46f5150a4ec57acdf7e45caf7a8d07bfe33791f5eafae3599245e6.scope.
Jan 29 09:38:26 compute-0 ceph-mon[75183]: pgmap v878: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:26 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18069742ea0de6c84efab89558b1938de39b28a1d77203d4ccc31d974099343e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18069742ea0de6c84efab89558b1938de39b28a1d77203d4ccc31d974099343e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18069742ea0de6c84efab89558b1938de39b28a1d77203d4ccc31d974099343e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18069742ea0de6c84efab89558b1938de39b28a1d77203d4ccc31d974099343e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:38:26 compute-0 podman[250794]: 2026-01-29 09:38:26.387650081 +0000 UTC m=+0.023404564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:38:26 compute-0 podman[250794]: 2026-01-29 09:38:26.49423236 +0000 UTC m=+0.129986873 container init a966fa9b5d46f5150a4ec57acdf7e45caf7a8d07bfe33791f5eafae3599245e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 09:38:26 compute-0 podman[250794]: 2026-01-29 09:38:26.5033428 +0000 UTC m=+0.139097303 container start a966fa9b5d46f5150a4ec57acdf7e45caf7a8d07bfe33791f5eafae3599245e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 09:38:26 compute-0 podman[250794]: 2026-01-29 09:38:26.511029161 +0000 UTC m=+0.146783714 container attach a966fa9b5d46f5150a4ec57acdf7e45caf7a8d07bfe33791f5eafae3599245e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_keldysh, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:38:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:38:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:38:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:38:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:38:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:38:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]: {
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:     "0": [
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:         {
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "devices": [
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "/dev/loop3"
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             ],
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_name": "ceph_lv0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_size": "21470642176",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "name": "ceph_lv0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "tags": {
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.cluster_name": "ceph",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.crush_device_class": "",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.encrypted": "0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.objectstore": "bluestore",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.osd_id": "0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.type": "block",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.vdo": "0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.with_tpm": "0"
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             },
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "type": "block",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "vg_name": "ceph_vg0"
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:         }
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:     ],
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:     "1": [
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:         {
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "devices": [
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "/dev/loop4"
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             ],
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_name": "ceph_lv1",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_size": "21470642176",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "name": "ceph_lv1",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "tags": {
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.cluster_name": "ceph",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.crush_device_class": "",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.encrypted": "0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.objectstore": "bluestore",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.osd_id": "1",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.type": "block",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.vdo": "0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.with_tpm": "0"
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             },
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "type": "block",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "vg_name": "ceph_vg1"
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:         }
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:     ],
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:     "2": [
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:         {
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "devices": [
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "/dev/loop5"
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             ],
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_name": "ceph_lv2",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_size": "21470642176",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "name": "ceph_lv2",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "tags": {
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.cluster_name": "ceph",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.crush_device_class": "",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.encrypted": "0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.objectstore": "bluestore",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.osd_id": "2",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.type": "block",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.vdo": "0",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:                 "ceph.with_tpm": "0"
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             },
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "type": "block",
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:             "vg_name": "ceph_vg2"
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:         }
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]:     ]
Jan 29 09:38:26 compute-0 hungry_keldysh[250811]: }
Jan 29 09:38:26 compute-0 systemd[1]: libpod-a966fa9b5d46f5150a4ec57acdf7e45caf7a8d07bfe33791f5eafae3599245e6.scope: Deactivated successfully.
Jan 29 09:38:26 compute-0 podman[250794]: 2026-01-29 09:38:26.819825786 +0000 UTC m=+0.455580269 container died a966fa9b5d46f5150a4ec57acdf7e45caf7a8d07bfe33791f5eafae3599245e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 09:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-18069742ea0de6c84efab89558b1938de39b28a1d77203d4ccc31d974099343e-merged.mount: Deactivated successfully.
Jan 29 09:38:26 compute-0 podman[250794]: 2026-01-29 09:38:26.856676418 +0000 UTC m=+0.492430881 container remove a966fa9b5d46f5150a4ec57acdf7e45caf7a8d07bfe33791f5eafae3599245e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_keldysh, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:38:26 compute-0 systemd[1]: libpod-conmon-a966fa9b5d46f5150a4ec57acdf7e45caf7a8d07bfe33791f5eafae3599245e6.scope: Deactivated successfully.
Jan 29 09:38:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v879: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:26 compute-0 sudo[250717]: pam_unix(sudo:session): session closed for user root
Jan 29 09:38:26 compute-0 sudo[250831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:38:26 compute-0 sudo[250831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:38:26 compute-0 sudo[250831]: pam_unix(sudo:session): session closed for user root
Jan 29 09:38:26 compute-0 sudo[250856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:38:26 compute-0 sudo[250856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:38:27 compute-0 podman[250893]: 2026-01-29 09:38:27.227078065 +0000 UTC m=+0.036224126 container create 5792e7daff3a933b3f8affe600edd4d82682768a3dbf7d1ccf2a08bb09e42abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:38:27 compute-0 systemd[1]: Started libpod-conmon-5792e7daff3a933b3f8affe600edd4d82682768a3dbf7d1ccf2a08bb09e42abd.scope.
Jan 29 09:38:27 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:38:27 compute-0 podman[250893]: 2026-01-29 09:38:27.277648125 +0000 UTC m=+0.086794206 container init 5792e7daff3a933b3f8affe600edd4d82682768a3dbf7d1ccf2a08bb09e42abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:38:27 compute-0 podman[250893]: 2026-01-29 09:38:27.283988819 +0000 UTC m=+0.093134880 container start 5792e7daff3a933b3f8affe600edd4d82682768a3dbf7d1ccf2a08bb09e42abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:38:27 compute-0 distracted_fermi[250910]: 167 167
Jan 29 09:38:27 compute-0 systemd[1]: libpod-5792e7daff3a933b3f8affe600edd4d82682768a3dbf7d1ccf2a08bb09e42abd.scope: Deactivated successfully.
Jan 29 09:38:27 compute-0 podman[250893]: 2026-01-29 09:38:27.288519723 +0000 UTC m=+0.097665784 container attach 5792e7daff3a933b3f8affe600edd4d82682768a3dbf7d1ccf2a08bb09e42abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 09:38:27 compute-0 podman[250893]: 2026-01-29 09:38:27.289160031 +0000 UTC m=+0.098306122 container died 5792e7daff3a933b3f8affe600edd4d82682768a3dbf7d1ccf2a08bb09e42abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:38:27 compute-0 podman[250893]: 2026-01-29 09:38:27.210668374 +0000 UTC m=+0.019814465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e96b30616366fa9388ff3b841249bb4e2aa895eec309db4a97a53e4402228a6e-merged.mount: Deactivated successfully.
Jan 29 09:38:27 compute-0 podman[250893]: 2026-01-29 09:38:27.324632815 +0000 UTC m=+0.133778876 container remove 5792e7daff3a933b3f8affe600edd4d82682768a3dbf7d1ccf2a08bb09e42abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 09:38:27 compute-0 systemd[1]: libpod-conmon-5792e7daff3a933b3f8affe600edd4d82682768a3dbf7d1ccf2a08bb09e42abd.scope: Deactivated successfully.
Jan 29 09:38:27 compute-0 podman[250907]: 2026-01-29 09:38:27.380023907 +0000 UTC m=+0.119293818 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 29 09:38:27 compute-0 podman[250952]: 2026-01-29 09:38:27.445351662 +0000 UTC m=+0.039182117 container create f97ce82af392a13781a130756af962cfa4fa35766df074c48d2717614f0849fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_rubin, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 09:38:27 compute-0 systemd[1]: Started libpod-conmon-f97ce82af392a13781a130756af962cfa4fa35766df074c48d2717614f0849fc.scope.
Jan 29 09:38:27 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f956ca8ebffc08da0b599f0a2bca30721f257b09a4b508ec1d187ca20fdf43d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f956ca8ebffc08da0b599f0a2bca30721f257b09a4b508ec1d187ca20fdf43d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f956ca8ebffc08da0b599f0a2bca30721f257b09a4b508ec1d187ca20fdf43d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f956ca8ebffc08da0b599f0a2bca30721f257b09a4b508ec1d187ca20fdf43d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:38:27 compute-0 podman[250952]: 2026-01-29 09:38:27.429078745 +0000 UTC m=+0.022909230 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:38:27 compute-0 podman[250952]: 2026-01-29 09:38:27.52679056 +0000 UTC m=+0.120621015 container init f97ce82af392a13781a130756af962cfa4fa35766df074c48d2717614f0849fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_rubin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 09:38:27 compute-0 podman[250952]: 2026-01-29 09:38:27.534095831 +0000 UTC m=+0.127926286 container start f97ce82af392a13781a130756af962cfa4fa35766df074c48d2717614f0849fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 29 09:38:27 compute-0 podman[250952]: 2026-01-29 09:38:27.538775289 +0000 UTC m=+0.132605744 container attach f97ce82af392a13781a130756af962cfa4fa35766df074c48d2717614f0849fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 09:38:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:38:28 compute-0 lvm[251046]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:38:28 compute-0 lvm[251047]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:38:28 compute-0 lvm[251046]: VG ceph_vg0 finished
Jan 29 09:38:28 compute-0 lvm[251047]: VG ceph_vg1 finished
Jan 29 09:38:28 compute-0 lvm[251049]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:38:28 compute-0 lvm[251049]: VG ceph_vg2 finished
Jan 29 09:38:28 compute-0 dazzling_rubin[250967]: {}
Jan 29 09:38:28 compute-0 systemd[1]: libpod-f97ce82af392a13781a130756af962cfa4fa35766df074c48d2717614f0849fc.scope: Deactivated successfully.
Jan 29 09:38:28 compute-0 systemd[1]: libpod-f97ce82af392a13781a130756af962cfa4fa35766df074c48d2717614f0849fc.scope: Consumed 1.060s CPU time.
Jan 29 09:38:28 compute-0 podman[250952]: 2026-01-29 09:38:28.299280064 +0000 UTC m=+0.893110589 container died f97ce82af392a13781a130756af962cfa4fa35766df074c48d2717614f0849fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_rubin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 09:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f956ca8ebffc08da0b599f0a2bca30721f257b09a4b508ec1d187ca20fdf43d-merged.mount: Deactivated successfully.
Jan 29 09:38:28 compute-0 podman[250952]: 2026-01-29 09:38:28.363364874 +0000 UTC m=+0.957195329 container remove f97ce82af392a13781a130756af962cfa4fa35766df074c48d2717614f0849fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:38:28 compute-0 systemd[1]: libpod-conmon-f97ce82af392a13781a130756af962cfa4fa35766df074c48d2717614f0849fc.scope: Deactivated successfully.
Jan 29 09:38:28 compute-0 sudo[250856]: pam_unix(sudo:session): session closed for user root
Jan 29 09:38:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:38:28 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:38:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:38:28 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:38:28 compute-0 ceph-mon[75183]: pgmap v879: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:28 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:38:28 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:38:28 compute-0 sudo[251066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:38:28 compute-0 sudo[251066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:38:28 compute-0 sudo[251066]: pam_unix(sudo:session): session closed for user root
Jan 29 09:38:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v880: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:29 compute-0 ceph-mon[75183]: pgmap v880: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v881: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:31 compute-0 ceph-mon[75183]: pgmap v881: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v882: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:38:33 compute-0 ceph-mon[75183]: pgmap v882: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v883: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:35 compute-0 ceph-mon[75183]: pgmap v883: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v884: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:37 compute-0 ceph-mon[75183]: pgmap v884: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:38:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v885: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:39 compute-0 ceph-mon[75183]: pgmap v885: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v886: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:42 compute-0 ceph-mon[75183]: pgmap v886: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v887: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:38:43 compute-0 nova_compute[236255]: 2026-01-29 09:38:43.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:38:44 compute-0 ceph-mon[75183]: pgmap v887: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v888: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:45 compute-0 nova_compute[236255]: 2026-01-29 09:38:45.227 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:38:45 compute-0 nova_compute[236255]: 2026-01-29 09:38:45.227 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:38:46 compute-0 ceph-mon[75183]: pgmap v888: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v889: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:47 compute-0 nova_compute[236255]: 2026-01-29 09:38:47.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:38:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:38:48 compute-0 ceph-mon[75183]: pgmap v889: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:48 compute-0 nova_compute[236255]: 2026-01-29 09:38:48.550 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:38:48 compute-0 nova_compute[236255]: 2026-01-29 09:38:48.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:38:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v890: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:50 compute-0 ceph-mon[75183]: pgmap v890: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:50 compute-0 nova_compute[236255]: 2026-01-29 09:38:50.550 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:38:50 compute-0 nova_compute[236255]: 2026-01-29 09:38:50.579 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:38:50 compute-0 nova_compute[236255]: 2026-01-29 09:38:50.580 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:38:50 compute-0 nova_compute[236255]: 2026-01-29 09:38:50.606 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:38:50 compute-0 nova_compute[236255]: 2026-01-29 09:38:50.607 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:38:50 compute-0 nova_compute[236255]: 2026-01-29 09:38:50.607 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:38:50 compute-0 nova_compute[236255]: 2026-01-29 09:38:50.607 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:38:50 compute-0 nova_compute[236255]: 2026-01-29 09:38:50.607 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:38:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v891: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:38:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1966027614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:38:51 compute-0 nova_compute[236255]: 2026-01-29 09:38:51.167 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:38:51 compute-0 nova_compute[236255]: 2026-01-29 09:38:51.339 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:38:51 compute-0 nova_compute[236255]: 2026-01-29 09:38:51.340 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5105MB free_disk=59.98826471157372GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:38:51 compute-0 nova_compute[236255]: 2026-01-29 09:38:51.340 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:38:51 compute-0 nova_compute[236255]: 2026-01-29 09:38:51.341 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:38:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:38:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3650635819' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:38:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:38:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3650635819' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:38:51 compute-0 nova_compute[236255]: 2026-01-29 09:38:51.618 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:38:51 compute-0 nova_compute[236255]: 2026-01-29 09:38:51.618 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:38:51 compute-0 nova_compute[236255]: 2026-01-29 09:38:51.711 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Refreshing inventories for resource provider 2689825d-8fa0-473a-adf1-5005faba9bec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 29 09:38:51 compute-0 nova_compute[236255]: 2026-01-29 09:38:51.814 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Updating ProviderTree inventory for provider 2689825d-8fa0-473a-adf1-5005faba9bec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 29 09:38:51 compute-0 nova_compute[236255]: 2026-01-29 09:38:51.815 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Updating inventory in ProviderTree for provider 2689825d-8fa0-473a-adf1-5005faba9bec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 29 09:38:51 compute-0 nova_compute[236255]: 2026-01-29 09:38:51.843 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Refreshing aggregate associations for resource provider 2689825d-8fa0-473a-adf1-5005faba9bec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 29 09:38:51 compute-0 nova_compute[236255]: 2026-01-29 09:38:51.875 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Refreshing trait associations for resource provider 2689825d-8fa0-473a-adf1-5005faba9bec, traits: HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 29 09:38:51 compute-0 nova_compute[236255]: 2026-01-29 09:38:51.899 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:38:52 compute-0 ceph-mon[75183]: pgmap v891: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1966027614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:38:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3650635819' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:38:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/3650635819' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:38:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:38:52 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2546563514' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.428 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.434 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.449 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.450 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.451 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.451 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.451 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.566 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.567 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.567 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.586 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.586 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.587 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.587 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 29 09:38:52 compute-0 nova_compute[236255]: 2026-01-29 09:38:52.600 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 29 09:38:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v892: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:38:53 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2546563514' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:38:53 compute-0 nova_compute[236255]: 2026-01-29 09:38:53.569 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:38:54 compute-0 ceph-mon[75183]: pgmap v892: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v893: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:55 compute-0 podman[251135]: 2026-01-29 09:38:55.157089761 +0000 UTC m=+0.090083446 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:38:56
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.mgr', 'volumes']
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:38:56 compute-0 ceph-mon[75183]: pgmap v893: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:38:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v894: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:38:58 compute-0 ceph-mon[75183]: pgmap v894: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:38:58 compute-0 podman[251160]: 2026-01-29 09:38:58.127847981 +0000 UTC m=+0.072240359 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 29 09:38:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v895: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:00 compute-0 ceph-mon[75183]: pgmap v895: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v896: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.756942845403104e-07 of space, bias 1.0, pg target 8.270828536209312e-05 quantized to 32 (current 32)
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.5000327611377854e-07 of space, bias 1.0, pg target 4.500098283413356e-05 quantized to 32 (current 32)
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683123502180493 of space, bias 1.0, pg target 0.2004937050654148 quantized to 32 (current 32)
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2051960589356773e-06 of space, bias 4.0, pg target 0.0014462352707228128 quantized to 16 (current 32)
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:39:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:39:02 compute-0 ceph-mon[75183]: pgmap v896: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v897: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:39:04 compute-0 ceph-mon[75183]: pgmap v897: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v898: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:06 compute-0 ceph-mon[75183]: pgmap v898: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v899: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:39:08 compute-0 ceph-mon[75183]: pgmap v899: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v900: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:39:09.043 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:39:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:39:09.044 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:39:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:39:09.044 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:39:10 compute-0 ceph-mon[75183]: pgmap v900: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v901: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:12 compute-0 ceph-mon[75183]: pgmap v901: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v902: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:39:14 compute-0 ceph-mon[75183]: pgmap v902: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v903: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:16 compute-0 ceph-mon[75183]: pgmap v903: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v904: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:39:18 compute-0 ceph-mon[75183]: pgmap v904: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v905: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:19 compute-0 ceph-mon[75183]: pgmap v905: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v906: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:21 compute-0 ceph-mon[75183]: pgmap v906: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v907: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:39:23 compute-0 ceph-mon[75183]: pgmap v907: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v908: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:26 compute-0 ceph-mon[75183]: pgmap v908: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:26 compute-0 podman[251179]: 2026-01-29 09:39:26.142450126 +0000 UTC m=+0.083502227 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 09:39:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:39:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:39:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:39:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:39:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:39:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:39:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v909: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:28 compute-0 ceph-mon[75183]: pgmap v909: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.032549) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679568032604, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 904, "num_deletes": 257, "total_data_size": 832604, "memory_usage": 849176, "flush_reason": "Manual Compaction"}
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679568042550, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 821367, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18287, "largest_seqno": 19190, "table_properties": {"data_size": 816892, "index_size": 2127, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9322, "raw_average_key_size": 18, "raw_value_size": 807849, "raw_average_value_size": 1596, "num_data_blocks": 97, "num_entries": 506, "num_filter_entries": 506, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769679484, "oldest_key_time": 1769679484, "file_creation_time": 1769679568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 10072 microseconds, and 4593 cpu microseconds.
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.042614) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 821367 bytes OK
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.042644) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.044309) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.044336) EVENT_LOG_v1 {"time_micros": 1769679568044327, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.044364) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 828203, prev total WAL file size 828203, number of live WAL files 2.
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.044874) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(802KB)], [44(4681KB)]
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679568044905, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 5615226, "oldest_snapshot_seqno": -1}
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 3873 keys, 5516904 bytes, temperature: kUnknown
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679568083745, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 5516904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5489548, "index_size": 16558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9733, "raw_key_size": 93805, "raw_average_key_size": 24, "raw_value_size": 5418516, "raw_average_value_size": 1399, "num_data_blocks": 706, "num_entries": 3873, "num_filter_entries": 3873, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677896, "oldest_key_time": 0, "file_creation_time": 1769679568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.084080) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 5516904 bytes
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.086079) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.2 rd, 141.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 4.6 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(13.6) write-amplify(6.7) OK, records in: 4399, records dropped: 526 output_compression: NoCompression
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.086098) EVENT_LOG_v1 {"time_micros": 1769679568086087, "job": 22, "event": "compaction_finished", "compaction_time_micros": 38953, "compaction_time_cpu_micros": 11309, "output_level": 6, "num_output_files": 1, "total_output_size": 5516904, "num_input_records": 4399, "num_output_records": 3873, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679568086377, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679568087010, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.044808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.087102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.087110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.087113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.087116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:39:28 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:39:28.087119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:39:28 compute-0 sudo[251205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:39:28 compute-0 sudo[251205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:39:28 compute-0 sudo[251205]: pam_unix(sudo:session): session closed for user root
Jan 29 09:39:28 compute-0 podman[251229]: 2026-01-29 09:39:28.636770321 +0000 UTC m=+0.049851980 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:39:28 compute-0 sudo[251236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:39:28 compute-0 sudo[251236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:39:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v910: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:29 compute-0 sudo[251236]: pam_unix(sudo:session): session closed for user root
Jan 29 09:39:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:39:29 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:39:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:39:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:39:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:39:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:39:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:39:29 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:39:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:39:29 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:39:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:39:29 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:39:29 compute-0 sudo[251303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:39:29 compute-0 sudo[251303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:39:29 compute-0 sudo[251303]: pam_unix(sudo:session): session closed for user root
Jan 29 09:39:29 compute-0 sudo[251328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:39:29 compute-0 sudo[251328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:39:29 compute-0 podman[251364]: 2026-01-29 09:39:29.434406689 +0000 UTC m=+0.056156752 container create 45052824803b30450fae8d6b9e6659a7fd8102cd956a412c11373e0bc6f1f5e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_mirzakhani, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:39:29 compute-0 systemd[1]: Started libpod-conmon-45052824803b30450fae8d6b9e6659a7fd8102cd956a412c11373e0bc6f1f5e2.scope.
Jan 29 09:39:29 compute-0 podman[251364]: 2026-01-29 09:39:29.406391585 +0000 UTC m=+0.028141718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:39:29 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:39:29 compute-0 podman[251364]: 2026-01-29 09:39:29.52287279 +0000 UTC m=+0.144622873 container init 45052824803b30450fae8d6b9e6659a7fd8102cd956a412c11373e0bc6f1f5e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 09:39:29 compute-0 podman[251364]: 2026-01-29 09:39:29.531986918 +0000 UTC m=+0.153737171 container start 45052824803b30450fae8d6b9e6659a7fd8102cd956a412c11373e0bc6f1f5e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 09:39:29 compute-0 podman[251364]: 2026-01-29 09:39:29.537017415 +0000 UTC m=+0.158767488 container attach 45052824803b30450fae8d6b9e6659a7fd8102cd956a412c11373e0bc6f1f5e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 09:39:29 compute-0 wizardly_mirzakhani[251380]: 167 167
Jan 29 09:39:29 compute-0 systemd[1]: libpod-45052824803b30450fae8d6b9e6659a7fd8102cd956a412c11373e0bc6f1f5e2.scope: Deactivated successfully.
Jan 29 09:39:29 compute-0 podman[251364]: 2026-01-29 09:39:29.539118192 +0000 UTC m=+0.160868245 container died 45052824803b30450fae8d6b9e6659a7fd8102cd956a412c11373e0bc6f1f5e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_mirzakhani, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 09:39:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8045bb27c6481c9e9803567862a7a3bb1d880fe680742cc7888ac735c2e88616-merged.mount: Deactivated successfully.
Jan 29 09:39:29 compute-0 podman[251364]: 2026-01-29 09:39:29.589253859 +0000 UTC m=+0.211003912 container remove 45052824803b30450fae8d6b9e6659a7fd8102cd956a412c11373e0bc6f1f5e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 09:39:29 compute-0 systemd[1]: libpod-conmon-45052824803b30450fae8d6b9e6659a7fd8102cd956a412c11373e0bc6f1f5e2.scope: Deactivated successfully.
Jan 29 09:39:29 compute-0 podman[251405]: 2026-01-29 09:39:29.726432347 +0000 UTC m=+0.045729387 container create 4b993dcaee68316889386332d7f00eb09ed4af4f63fe6427ffc372aa450551d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 09:39:29 compute-0 systemd[1]: Started libpod-conmon-4b993dcaee68316889386332d7f00eb09ed4af4f63fe6427ffc372aa450551d2.scope.
Jan 29 09:39:29 compute-0 podman[251405]: 2026-01-29 09:39:29.702950127 +0000 UTC m=+0.022247197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:39:29 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e9909e3722d29284a35b5a4410f5c923daf59ef51c068a84c2618dfa83227/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e9909e3722d29284a35b5a4410f5c923daf59ef51c068a84c2618dfa83227/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e9909e3722d29284a35b5a4410f5c923daf59ef51c068a84c2618dfa83227/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e9909e3722d29284a35b5a4410f5c923daf59ef51c068a84c2618dfa83227/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e9909e3722d29284a35b5a4410f5c923daf59ef51c068a84c2618dfa83227/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:39:29 compute-0 podman[251405]: 2026-01-29 09:39:29.820284285 +0000 UTC m=+0.139581345 container init 4b993dcaee68316889386332d7f00eb09ed4af4f63fe6427ffc372aa450551d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_murdock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:39:29 compute-0 podman[251405]: 2026-01-29 09:39:29.827578594 +0000 UTC m=+0.146875634 container start 4b993dcaee68316889386332d7f00eb09ed4af4f63fe6427ffc372aa450551d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_murdock, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:39:29 compute-0 podman[251405]: 2026-01-29 09:39:29.830796841 +0000 UTC m=+0.150093901 container attach 4b993dcaee68316889386332d7f00eb09ed4af4f63fe6427ffc372aa450551d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_murdock, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 09:39:30 compute-0 ceph-mon[75183]: pgmap v910: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:39:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:39:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:39:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:39:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:39:30 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:39:30 compute-0 unruffled_murdock[251420]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:39:30 compute-0 unruffled_murdock[251420]: --> All data devices are unavailable
Jan 29 09:39:30 compute-0 systemd[1]: libpod-4b993dcaee68316889386332d7f00eb09ed4af4f63fe6427ffc372aa450551d2.scope: Deactivated successfully.
Jan 29 09:39:30 compute-0 podman[251405]: 2026-01-29 09:39:30.27364311 +0000 UTC m=+0.592940150 container died 4b993dcaee68316889386332d7f00eb09ed4af4f63fe6427ffc372aa450551d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_murdock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:39:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc5e9909e3722d29284a35b5a4410f5c923daf59ef51c068a84c2618dfa83227-merged.mount: Deactivated successfully.
Jan 29 09:39:30 compute-0 podman[251405]: 2026-01-29 09:39:30.316098427 +0000 UTC m=+0.635395467 container remove 4b993dcaee68316889386332d7f00eb09ed4af4f63fe6427ffc372aa450551d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 09:39:30 compute-0 systemd[1]: libpod-conmon-4b993dcaee68316889386332d7f00eb09ed4af4f63fe6427ffc372aa450551d2.scope: Deactivated successfully.
Jan 29 09:39:30 compute-0 sudo[251328]: pam_unix(sudo:session): session closed for user root
Jan 29 09:39:30 compute-0 sudo[251453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:39:30 compute-0 sudo[251453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:39:30 compute-0 sudo[251453]: pam_unix(sudo:session): session closed for user root
Jan 29 09:39:30 compute-0 sudo[251478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:39:30 compute-0 sudo[251478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:39:30 compute-0 podman[251516]: 2026-01-29 09:39:30.703306929 +0000 UTC m=+0.044060691 container create ea2fb046596d0a33780fd224d4fd9b6a7d27d26ae5fefc4c5cfd66777da90855 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 09:39:30 compute-0 systemd[1]: Started libpod-conmon-ea2fb046596d0a33780fd224d4fd9b6a7d27d26ae5fefc4c5cfd66777da90855.scope.
Jan 29 09:39:30 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:39:30 compute-0 podman[251516]: 2026-01-29 09:39:30.776617297 +0000 UTC m=+0.117371049 container init ea2fb046596d0a33780fd224d4fd9b6a7d27d26ae5fefc4c5cfd66777da90855 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:39:30 compute-0 podman[251516]: 2026-01-29 09:39:30.781675475 +0000 UTC m=+0.122429197 container start ea2fb046596d0a33780fd224d4fd9b6a7d27d26ae5fefc4c5cfd66777da90855 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:39:30 compute-0 podman[251516]: 2026-01-29 09:39:30.687212621 +0000 UTC m=+0.027966363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:39:30 compute-0 podman[251516]: 2026-01-29 09:39:30.785086208 +0000 UTC m=+0.125839960 container attach ea2fb046596d0a33780fd224d4fd9b6a7d27d26ae5fefc4c5cfd66777da90855 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:39:30 compute-0 suspicious_hermann[251532]: 167 167
Jan 29 09:39:30 compute-0 systemd[1]: libpod-ea2fb046596d0a33780fd224d4fd9b6a7d27d26ae5fefc4c5cfd66777da90855.scope: Deactivated successfully.
Jan 29 09:39:30 compute-0 podman[251516]: 2026-01-29 09:39:30.786949299 +0000 UTC m=+0.127703041 container died ea2fb046596d0a33780fd224d4fd9b6a7d27d26ae5fefc4c5cfd66777da90855 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 09:39:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f339967b4bd64e67ed97f9eeacef42da17986413f13c73d42b4748337d35929a-merged.mount: Deactivated successfully.
Jan 29 09:39:30 compute-0 podman[251516]: 2026-01-29 09:39:30.824974415 +0000 UTC m=+0.165728137 container remove ea2fb046596d0a33780fd224d4fd9b6a7d27d26ae5fefc4c5cfd66777da90855 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:39:30 compute-0 systemd[1]: libpod-conmon-ea2fb046596d0a33780fd224d4fd9b6a7d27d26ae5fefc4c5cfd66777da90855.scope: Deactivated successfully.
Jan 29 09:39:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v911: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:30 compute-0 podman[251557]: 2026-01-29 09:39:30.945456088 +0000 UTC m=+0.038127450 container create 4803d9610dab09aafd56e311fbf7f76185a8fff6303e2f52b972aba5de1bff1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_edison, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:39:30 compute-0 systemd[1]: Started libpod-conmon-4803d9610dab09aafd56e311fbf7f76185a8fff6303e2f52b972aba5de1bff1d.scope.
Jan 29 09:39:31 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a10737b45df1050aba0cd8e75102cbd15a0738c770eba26b7589ace5b0058c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a10737b45df1050aba0cd8e75102cbd15a0738c770eba26b7589ace5b0058c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a10737b45df1050aba0cd8e75102cbd15a0738c770eba26b7589ace5b0058c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a10737b45df1050aba0cd8e75102cbd15a0738c770eba26b7589ace5b0058c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:39:31 compute-0 podman[251557]: 2026-01-29 09:39:30.931674563 +0000 UTC m=+0.024345945 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:39:31 compute-0 podman[251557]: 2026-01-29 09:39:31.029798917 +0000 UTC m=+0.122470299 container init 4803d9610dab09aafd56e311fbf7f76185a8fff6303e2f52b972aba5de1bff1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:39:31 compute-0 podman[251557]: 2026-01-29 09:39:31.035523223 +0000 UTC m=+0.128194585 container start 4803d9610dab09aafd56e311fbf7f76185a8fff6303e2f52b972aba5de1bff1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_edison, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 09:39:31 compute-0 podman[251557]: 2026-01-29 09:39:31.039339747 +0000 UTC m=+0.132011129 container attach 4803d9610dab09aafd56e311fbf7f76185a8fff6303e2f52b972aba5de1bff1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 09:39:31 compute-0 upbeat_edison[251574]: {
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:     "0": [
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:         {
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "devices": [
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "/dev/loop3"
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             ],
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_name": "ceph_lv0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_size": "21470642176",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "name": "ceph_lv0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "tags": {
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.cluster_name": "ceph",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.crush_device_class": "",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.encrypted": "0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.objectstore": "bluestore",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.osd_id": "0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.type": "block",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.vdo": "0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.with_tpm": "0"
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             },
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "type": "block",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "vg_name": "ceph_vg0"
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:         }
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:     ],
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:     "1": [
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:         {
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "devices": [
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "/dev/loop4"
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             ],
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_name": "ceph_lv1",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_size": "21470642176",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "name": "ceph_lv1",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "tags": {
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.cluster_name": "ceph",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.crush_device_class": "",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.encrypted": "0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.objectstore": "bluestore",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.osd_id": "1",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.type": "block",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.vdo": "0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.with_tpm": "0"
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             },
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "type": "block",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "vg_name": "ceph_vg1"
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:         }
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:     ],
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:     "2": [
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:         {
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "devices": [
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "/dev/loop5"
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             ],
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_name": "ceph_lv2",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_size": "21470642176",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "name": "ceph_lv2",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "tags": {
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.cluster_name": "ceph",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.crush_device_class": "",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.encrypted": "0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.objectstore": "bluestore",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.osd_id": "2",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.type": "block",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.vdo": "0",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:                 "ceph.with_tpm": "0"
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             },
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "type": "block",
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:             "vg_name": "ceph_vg2"
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:         }
Jan 29 09:39:31 compute-0 upbeat_edison[251574]:     ]
Jan 29 09:39:31 compute-0 upbeat_edison[251574]: }
Jan 29 09:39:31 compute-0 systemd[1]: libpod-4803d9610dab09aafd56e311fbf7f76185a8fff6303e2f52b972aba5de1bff1d.scope: Deactivated successfully.
Jan 29 09:39:31 compute-0 podman[251557]: 2026-01-29 09:39:31.303578098 +0000 UTC m=+0.396249560 container died 4803d9610dab09aafd56e311fbf7f76185a8fff6303e2f52b972aba5de1bff1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_edison, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 09:39:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a10737b45df1050aba0cd8e75102cbd15a0738c770eba26b7589ace5b0058c7-merged.mount: Deactivated successfully.
Jan 29 09:39:31 compute-0 podman[251557]: 2026-01-29 09:39:31.342324274 +0000 UTC m=+0.434995656 container remove 4803d9610dab09aafd56e311fbf7f76185a8fff6303e2f52b972aba5de1bff1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:39:31 compute-0 systemd[1]: libpod-conmon-4803d9610dab09aafd56e311fbf7f76185a8fff6303e2f52b972aba5de1bff1d.scope: Deactivated successfully.
Jan 29 09:39:31 compute-0 sudo[251478]: pam_unix(sudo:session): session closed for user root
Jan 29 09:39:31 compute-0 sudo[251595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:39:31 compute-0 sudo[251595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:39:31 compute-0 sudo[251595]: pam_unix(sudo:session): session closed for user root
Jan 29 09:39:31 compute-0 sudo[251620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:39:31 compute-0 sudo[251620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:39:31 compute-0 podman[251657]: 2026-01-29 09:39:31.742447078 +0000 UTC m=+0.047842434 container create 33cb35d5825d3f7546be62cdde1d2141b05d1aeee58c40456e51bea94d2e9504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 09:39:31 compute-0 systemd[1]: Started libpod-conmon-33cb35d5825d3f7546be62cdde1d2141b05d1aeee58c40456e51bea94d2e9504.scope.
Jan 29 09:39:31 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:39:31 compute-0 podman[251657]: 2026-01-29 09:39:31.798853316 +0000 UTC m=+0.104248652 container init 33cb35d5825d3f7546be62cdde1d2141b05d1aeee58c40456e51bea94d2e9504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 09:39:31 compute-0 podman[251657]: 2026-01-29 09:39:31.80378826 +0000 UTC m=+0.109183616 container start 33cb35d5825d3f7546be62cdde1d2141b05d1aeee58c40456e51bea94d2e9504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 09:39:31 compute-0 hardcore_chatterjee[251673]: 167 167
Jan 29 09:39:31 compute-0 podman[251657]: 2026-01-29 09:39:31.717397046 +0000 UTC m=+0.022792452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:39:31 compute-0 systemd[1]: libpod-33cb35d5825d3f7546be62cdde1d2141b05d1aeee58c40456e51bea94d2e9504.scope: Deactivated successfully.
Jan 29 09:39:31 compute-0 podman[251657]: 2026-01-29 09:39:31.810889974 +0000 UTC m=+0.116285300 container attach 33cb35d5825d3f7546be62cdde1d2141b05d1aeee58c40456e51bea94d2e9504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 09:39:31 compute-0 conmon[251673]: conmon 33cb35d5825d3f7546be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33cb35d5825d3f7546be62cdde1d2141b05d1aeee58c40456e51bea94d2e9504.scope/container/memory.events
Jan 29 09:39:31 compute-0 podman[251657]: 2026-01-29 09:39:31.811896041 +0000 UTC m=+0.117291357 container died 33cb35d5825d3f7546be62cdde1d2141b05d1aeee58c40456e51bea94d2e9504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 09:39:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8526035bd2b4622a30ac70daaa6ff1304cffbddd4d37d4545c59430df61b32a9-merged.mount: Deactivated successfully.
Jan 29 09:39:31 compute-0 podman[251657]: 2026-01-29 09:39:31.851252514 +0000 UTC m=+0.156647860 container remove 33cb35d5825d3f7546be62cdde1d2141b05d1aeee58c40456e51bea94d2e9504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:39:31 compute-0 systemd[1]: libpod-conmon-33cb35d5825d3f7546be62cdde1d2141b05d1aeee58c40456e51bea94d2e9504.scope: Deactivated successfully.
Jan 29 09:39:31 compute-0 podman[251696]: 2026-01-29 09:39:31.98982408 +0000 UTC m=+0.039350013 container create 95579e0fe2076d20327add025e699aa19cc3430ad2a4c4ad2f59566ddd895e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_panini, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 09:39:32 compute-0 systemd[1]: Started libpod-conmon-95579e0fe2076d20327add025e699aa19cc3430ad2a4c4ad2f59566ddd895e93.scope.
Jan 29 09:39:32 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88c1c26ae92b0933923a2da9cc17fd0092a3a495bbcb5e927ab69e104ca98e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88c1c26ae92b0933923a2da9cc17fd0092a3a495bbcb5e927ab69e104ca98e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88c1c26ae92b0933923a2da9cc17fd0092a3a495bbcb5e927ab69e104ca98e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88c1c26ae92b0933923a2da9cc17fd0092a3a495bbcb5e927ab69e104ca98e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:39:32 compute-0 ceph-mon[75183]: pgmap v911: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:32 compute-0 podman[251696]: 2026-01-29 09:39:32.062206703 +0000 UTC m=+0.111732636 container init 95579e0fe2076d20327add025e699aa19cc3430ad2a4c4ad2f59566ddd895e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 09:39:32 compute-0 podman[251696]: 2026-01-29 09:39:31.974540213 +0000 UTC m=+0.024066156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:39:32 compute-0 podman[251696]: 2026-01-29 09:39:32.070144709 +0000 UTC m=+0.119670622 container start 95579e0fe2076d20327add025e699aa19cc3430ad2a4c4ad2f59566ddd895e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_panini, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:39:32 compute-0 podman[251696]: 2026-01-29 09:39:32.073762337 +0000 UTC m=+0.123288260 container attach 95579e0fe2076d20327add025e699aa19cc3430ad2a4c4ad2f59566ddd895e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_panini, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:39:32 compute-0 nova_compute[236255]: 2026-01-29 09:39:32.441 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:39:32 compute-0 lvm[251791]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:39:32 compute-0 lvm[251788]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:39:32 compute-0 lvm[251791]: VG ceph_vg1 finished
Jan 29 09:39:32 compute-0 lvm[251788]: VG ceph_vg0 finished
Jan 29 09:39:32 compute-0 lvm[251793]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:39:32 compute-0 lvm[251793]: VG ceph_vg2 finished
Jan 29 09:39:32 compute-0 focused_panini[251712]: {}
Jan 29 09:39:32 compute-0 systemd[1]: libpod-95579e0fe2076d20327add025e699aa19cc3430ad2a4c4ad2f59566ddd895e93.scope: Deactivated successfully.
Jan 29 09:39:32 compute-0 systemd[1]: libpod-95579e0fe2076d20327add025e699aa19cc3430ad2a4c4ad2f59566ddd895e93.scope: Consumed 1.049s CPU time.
Jan 29 09:39:32 compute-0 podman[251696]: 2026-01-29 09:39:32.833677166 +0000 UTC m=+0.883203089 container died 95579e0fe2076d20327add025e699aa19cc3430ad2a4c4ad2f59566ddd895e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 09:39:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-c88c1c26ae92b0933923a2da9cc17fd0092a3a495bbcb5e927ab69e104ca98e3-merged.mount: Deactivated successfully.
Jan 29 09:39:32 compute-0 podman[251696]: 2026-01-29 09:39:32.883487163 +0000 UTC m=+0.933013086 container remove 95579e0fe2076d20327add025e699aa19cc3430ad2a4c4ad2f59566ddd895e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 09:39:32 compute-0 systemd[1]: libpod-conmon-95579e0fe2076d20327add025e699aa19cc3430ad2a4c4ad2f59566ddd895e93.scope: Deactivated successfully.
Jan 29 09:39:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v912: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:32 compute-0 sudo[251620]: pam_unix(sudo:session): session closed for user root
Jan 29 09:39:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:39:32 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:39:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:39:32 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:39:33 compute-0 sudo[251807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:39:33 compute-0 sudo[251807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:39:33 compute-0 sudo[251807]: pam_unix(sudo:session): session closed for user root
Jan 29 09:39:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:39:33 compute-0 ceph-mon[75183]: pgmap v912: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:33 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:39:33 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:39:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v913: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:35 compute-0 ceph-mon[75183]: pgmap v913: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v914: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:37 compute-0 ceph-mon[75183]: pgmap v914: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:39:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v915: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:39 compute-0 ceph-mon[75183]: pgmap v915: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v916: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:42 compute-0 ceph-mon[75183]: pgmap v916: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v917: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:39:44 compute-0 ceph-mon[75183]: pgmap v917: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v918: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:46 compute-0 ceph-mon[75183]: pgmap v918: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:46 compute-0 nova_compute[236255]: 2026-01-29 09:39:46.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:39:46 compute-0 nova_compute[236255]: 2026-01-29 09:39:46.556 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:39:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v919: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:47 compute-0 nova_compute[236255]: 2026-01-29 09:39:47.557 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:39:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:39:48 compute-0 ceph-mon[75183]: pgmap v919: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v920: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:49 compute-0 nova_compute[236255]: 2026-01-29 09:39:49.551 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:39:50 compute-0 ceph-mon[75183]: pgmap v920: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:50 compute-0 nova_compute[236255]: 2026-01-29 09:39:50.532 236262 DEBUG oslo_concurrency.processutils [None req-d2a2a661-1c2e-4f49-a0ea-91a11343445a 4e9e3b7b5e454a2ba0b55b6667d036ce 52f911e1e5d24bb0a12e24a3a459a614 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:39:50 compute-0 nova_compute[236255]: 2026-01-29 09:39:50.547 236262 DEBUG oslo_concurrency.processutils [None req-d2a2a661-1c2e-4f49-a0ea-91a11343445a 4e9e3b7b5e454a2ba0b55b6667d036ce 52f911e1e5d24bb0a12e24a3a459a614 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:39:50 compute-0 nova_compute[236255]: 2026-01-29 09:39:50.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:39:50 compute-0 nova_compute[236255]: 2026-01-29 09:39:50.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:39:50 compute-0 nova_compute[236255]: 2026-01-29 09:39:50.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:39:50 compute-0 nova_compute[236255]: 2026-01-29 09:39:50.620 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:39:50 compute-0 nova_compute[236255]: 2026-01-29 09:39:50.620 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:39:50 compute-0 nova_compute[236255]: 2026-01-29 09:39:50.621 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:39:50 compute-0 nova_compute[236255]: 2026-01-29 09:39:50.621 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:39:50 compute-0 nova_compute[236255]: 2026-01-29 09:39:50.622 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:39:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v921: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:39:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3253306126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:39:51 compute-0 nova_compute[236255]: 2026-01-29 09:39:51.129 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:39:51 compute-0 nova_compute[236255]: 2026-01-29 09:39:51.261 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:39:51 compute-0 nova_compute[236255]: 2026-01-29 09:39:51.262 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5135MB free_disk=59.98826471157372GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:39:51 compute-0 nova_compute[236255]: 2026-01-29 09:39:51.262 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:39:51 compute-0 nova_compute[236255]: 2026-01-29 09:39:51.263 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:39:51 compute-0 nova_compute[236255]: 2026-01-29 09:39:51.344 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:39:51 compute-0 nova_compute[236255]: 2026-01-29 09:39:51.345 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:39:51 compute-0 nova_compute[236255]: 2026-01-29 09:39:51.359 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:39:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:39:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2766081176' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:39:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:39:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2766081176' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:39:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:39:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1287270202' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:39:51 compute-0 nova_compute[236255]: 2026-01-29 09:39:51.924 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:39:51 compute-0 nova_compute[236255]: 2026-01-29 09:39:51.930 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:39:51 compute-0 nova_compute[236255]: 2026-01-29 09:39:51.950 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:39:51 compute-0 nova_compute[236255]: 2026-01-29 09:39:51.953 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:39:51 compute-0 nova_compute[236255]: 2026-01-29 09:39:51.954 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:39:52 compute-0 ceph-mon[75183]: pgmap v921: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3253306126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:39:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/2766081176' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:39:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/2766081176' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:39:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1287270202' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:39:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v922: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:39:53 compute-0 nova_compute[236255]: 2026-01-29 09:39:53.955 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:39:53 compute-0 nova_compute[236255]: 2026-01-29 09:39:53.955 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:39:53 compute-0 nova_compute[236255]: 2026-01-29 09:39:53.955 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:39:54 compute-0 nova_compute[236255]: 2026-01-29 09:39:54.015 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:39:54 compute-0 nova_compute[236255]: 2026-01-29 09:39:54.015 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:39:54 compute-0 nova_compute[236255]: 2026-01-29 09:39:54.016 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:39:54 compute-0 ceph-mon[75183]: pgmap v922: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v923: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:39:56
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'images', 'volumes', 'vms', 'backups', 'cephfs.cephfs.meta']
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:39:56 compute-0 ceph-mon[75183]: pgmap v923: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:39:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v924: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:57 compute-0 podman[251877]: 2026-01-29 09:39:57.163904795 +0000 UTC m=+0.093002244 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 09:39:57 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:39:57.287 152476 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:86:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:f9:50:a2:e1:9f'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 29 09:39:57 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:39:57.288 152476 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 29 09:39:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:39:58 compute-0 ceph-mon[75183]: pgmap v924: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v925: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:39:59 compute-0 podman[251903]: 2026-01-29 09:39:59.127456578 +0000 UTC m=+0.062097393 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:40:00 compute-0 ceph-mon[75183]: pgmap v925: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v926: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:01 compute-0 anacron[30926]: Job `cron.weekly' started
Jan 29 09:40:01 compute-0 anacron[30926]: Job `cron.weekly' terminated
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.756942845403104e-07 of space, bias 1.0, pg target 8.270828536209312e-05 quantized to 32 (current 32)
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.5000327611377854e-07 of space, bias 1.0, pg target 4.500098283413356e-05 quantized to 32 (current 32)
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683123502180493 of space, bias 1.0, pg target 0.2004937050654148 quantized to 32 (current 32)
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2051960589356773e-06 of space, bias 4.0, pg target 0.0014462352707228128 quantized to 16 (current 32)
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:40:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:40:02 compute-0 ceph-mon[75183]: pgmap v926: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v927: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:40:04 compute-0 ceph-mon[75183]: pgmap v927: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v928: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:06 compute-0 ceph-mon[75183]: pgmap v928: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:06 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:40:06.291 152476 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=347a774e-f56f-46e9-8fb5-240ce07d1693, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 09:40:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v929: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:40:08 compute-0 ceph-mon[75183]: pgmap v929: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v930: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:40:09.044 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:40:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:40:09.045 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:40:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:40:09.045 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:40:10 compute-0 ceph-mon[75183]: pgmap v930: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v931: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:12 compute-0 ceph-mon[75183]: pgmap v931: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v932: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:40:14 compute-0 ceph-mon[75183]: pgmap v932: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v933: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:16 compute-0 ceph-mon[75183]: pgmap v933: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v934: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:40:18 compute-0 ceph-mon[75183]: pgmap v934: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v935: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:20 compute-0 ceph-mon[75183]: pgmap v935: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v936: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:22 compute-0 ceph-mon[75183]: pgmap v936: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v937: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:40:24 compute-0 ceph-mon[75183]: pgmap v937: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v938: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:26 compute-0 ceph-mon[75183]: pgmap v938: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:40:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:40:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:40:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:40:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:40:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:40:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v939: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:40:28 compute-0 podman[251925]: 2026-01-29 09:40:28.157231963 +0000 UTC m=+0.094886817 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 29 09:40:28 compute-0 ceph-mon[75183]: pgmap v939: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v940: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:30 compute-0 podman[251952]: 2026-01-29 09:40:30.135275218 +0000 UTC m=+0.075597361 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 29 09:40:30 compute-0 ceph-mon[75183]: pgmap v940: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v941: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:32 compute-0 ceph-mon[75183]: pgmap v941: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v942: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:40:33 compute-0 sudo[251971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:40:33 compute-0 sudo[251971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:40:33 compute-0 sudo[251971]: pam_unix(sudo:session): session closed for user root
Jan 29 09:40:33 compute-0 sudo[251996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:40:33 compute-0 sudo[251996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:40:33 compute-0 sudo[251996]: pam_unix(sudo:session): session closed for user root
Jan 29 09:40:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:40:33 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:40:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:40:33 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:40:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:40:33 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:40:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:40:33 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:40:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:40:33 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:40:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:40:33 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:40:33 compute-0 sudo[252053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:40:33 compute-0 sudo[252053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:40:33 compute-0 sudo[252053]: pam_unix(sudo:session): session closed for user root
Jan 29 09:40:33 compute-0 sudo[252078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:40:33 compute-0 sudo[252078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:40:34 compute-0 podman[252114]: 2026-01-29 09:40:34.029257018 +0000 UTC m=+0.053102748 container create 33c2e66e38b90d2a7530955da54fede509dd75f41fc6d725d56dce9b96cc84cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 09:40:34 compute-0 systemd[1]: Started libpod-conmon-33c2e66e38b90d2a7530955da54fede509dd75f41fc6d725d56dce9b96cc84cf.scope.
Jan 29 09:40:34 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:40:34 compute-0 podman[252114]: 2026-01-29 09:40:34.003835345 +0000 UTC m=+0.027681175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:40:34 compute-0 podman[252114]: 2026-01-29 09:40:34.108779445 +0000 UTC m=+0.132625205 container init 33c2e66e38b90d2a7530955da54fede509dd75f41fc6d725d56dce9b96cc84cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 09:40:34 compute-0 podman[252114]: 2026-01-29 09:40:34.115684543 +0000 UTC m=+0.139530273 container start 33c2e66e38b90d2a7530955da54fede509dd75f41fc6d725d56dce9b96cc84cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mendeleev, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 09:40:34 compute-0 podman[252114]: 2026-01-29 09:40:34.119694112 +0000 UTC m=+0.143539842 container attach 33c2e66e38b90d2a7530955da54fede509dd75f41fc6d725d56dce9b96cc84cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mendeleev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:40:34 compute-0 mystifying_mendeleev[252131]: 167 167
Jan 29 09:40:34 compute-0 systemd[1]: libpod-33c2e66e38b90d2a7530955da54fede509dd75f41fc6d725d56dce9b96cc84cf.scope: Deactivated successfully.
Jan 29 09:40:34 compute-0 conmon[252131]: conmon 33c2e66e38b90d2a7530 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33c2e66e38b90d2a7530955da54fede509dd75f41fc6d725d56dce9b96cc84cf.scope/container/memory.events
Jan 29 09:40:34 compute-0 podman[252114]: 2026-01-29 09:40:34.122942601 +0000 UTC m=+0.146788331 container died 33c2e66e38b90d2a7530955da54fede509dd75f41fc6d725d56dce9b96cc84cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mendeleev, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:40:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-16b5d8a4afa09269ec27f6aa0fdd54a63f73934f89aaff83e486c31752c428f4-merged.mount: Deactivated successfully.
Jan 29 09:40:34 compute-0 podman[252114]: 2026-01-29 09:40:34.161390319 +0000 UTC m=+0.185236049 container remove 33c2e66e38b90d2a7530955da54fede509dd75f41fc6d725d56dce9b96cc84cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mendeleev, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:40:34 compute-0 systemd[1]: libpod-conmon-33c2e66e38b90d2a7530955da54fede509dd75f41fc6d725d56dce9b96cc84cf.scope: Deactivated successfully.
Jan 29 09:40:34 compute-0 podman[252155]: 2026-01-29 09:40:34.297051716 +0000 UTC m=+0.044330429 container create b97ba467fccb18d62761d422d4cf95b051c58e07db287cd642218d069aa794e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hellman, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 09:40:34 compute-0 systemd[1]: Started libpod-conmon-b97ba467fccb18d62761d422d4cf95b051c58e07db287cd642218d069aa794e5.scope.
Jan 29 09:40:34 compute-0 ceph-mon[75183]: pgmap v942: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:34 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:40:34 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:40:34 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:40:34 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:40:34 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:40:34 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:40:34 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:40:34 compute-0 podman[252155]: 2026-01-29 09:40:34.275166269 +0000 UTC m=+0.022445002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc9081864075563619d36b6e23d9b7ae7a983a6bb5f84b7816de85bb95e6b71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc9081864075563619d36b6e23d9b7ae7a983a6bb5f84b7816de85bb95e6b71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc9081864075563619d36b6e23d9b7ae7a983a6bb5f84b7816de85bb95e6b71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc9081864075563619d36b6e23d9b7ae7a983a6bb5f84b7816de85bb95e6b71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc9081864075563619d36b6e23d9b7ae7a983a6bb5f84b7816de85bb95e6b71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:40:34 compute-0 podman[252155]: 2026-01-29 09:40:34.386419881 +0000 UTC m=+0.133698654 container init b97ba467fccb18d62761d422d4cf95b051c58e07db287cd642218d069aa794e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hellman, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:40:34 compute-0 podman[252155]: 2026-01-29 09:40:34.393048202 +0000 UTC m=+0.140326925 container start b97ba467fccb18d62761d422d4cf95b051c58e07db287cd642218d069aa794e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:40:34 compute-0 podman[252155]: 2026-01-29 09:40:34.397064541 +0000 UTC m=+0.144343274 container attach b97ba467fccb18d62761d422d4cf95b051c58e07db287cd642218d069aa794e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hellman, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 09:40:34 compute-0 dazzling_hellman[252171]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:40:34 compute-0 dazzling_hellman[252171]: --> All data devices are unavailable
Jan 29 09:40:34 compute-0 systemd[1]: libpod-b97ba467fccb18d62761d422d4cf95b051c58e07db287cd642218d069aa794e5.scope: Deactivated successfully.
Jan 29 09:40:34 compute-0 podman[252155]: 2026-01-29 09:40:34.824579192 +0000 UTC m=+0.571857895 container died b97ba467fccb18d62761d422d4cf95b051c58e07db287cd642218d069aa794e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hellman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 09:40:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cc9081864075563619d36b6e23d9b7ae7a983a6bb5f84b7816de85bb95e6b71-merged.mount: Deactivated successfully.
Jan 29 09:40:34 compute-0 podman[252155]: 2026-01-29 09:40:34.860894502 +0000 UTC m=+0.608173235 container remove b97ba467fccb18d62761d422d4cf95b051c58e07db287cd642218d069aa794e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:40:34 compute-0 systemd[1]: libpod-conmon-b97ba467fccb18d62761d422d4cf95b051c58e07db287cd642218d069aa794e5.scope: Deactivated successfully.
Jan 29 09:40:34 compute-0 sudo[252078]: pam_unix(sudo:session): session closed for user root
Jan 29 09:40:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v943: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:34 compute-0 sudo[252203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:40:34 compute-0 sudo[252203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:40:34 compute-0 sudo[252203]: pam_unix(sudo:session): session closed for user root
Jan 29 09:40:35 compute-0 sudo[252228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:40:35 compute-0 sudo[252228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:40:35 compute-0 podman[252265]: 2026-01-29 09:40:35.366934873 +0000 UTC m=+0.043298162 container create 7dd938a7e4082ace1d7f7a14d28164055d2802ba79c72aa421b68ce93573114e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 09:40:35 compute-0 systemd[1]: Started libpod-conmon-7dd938a7e4082ace1d7f7a14d28164055d2802ba79c72aa421b68ce93573114e.scope.
Jan 29 09:40:35 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:40:35 compute-0 podman[252265]: 2026-01-29 09:40:35.437372322 +0000 UTC m=+0.113735621 container init 7dd938a7e4082ace1d7f7a14d28164055d2802ba79c72aa421b68ce93573114e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:40:35 compute-0 podman[252265]: 2026-01-29 09:40:35.444499777 +0000 UTC m=+0.120863056 container start 7dd938a7e4082ace1d7f7a14d28164055d2802ba79c72aa421b68ce93573114e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_herschel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:40:35 compute-0 podman[252265]: 2026-01-29 09:40:35.350637608 +0000 UTC m=+0.027000917 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:40:35 compute-0 modest_herschel[252281]: 167 167
Jan 29 09:40:35 compute-0 podman[252265]: 2026-01-29 09:40:35.450249883 +0000 UTC m=+0.126613192 container attach 7dd938a7e4082ace1d7f7a14d28164055d2802ba79c72aa421b68ce93573114e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 09:40:35 compute-0 systemd[1]: libpod-7dd938a7e4082ace1d7f7a14d28164055d2802ba79c72aa421b68ce93573114e.scope: Deactivated successfully.
Jan 29 09:40:35 compute-0 conmon[252281]: conmon 7dd938a7e4082ace1d7f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7dd938a7e4082ace1d7f7a14d28164055d2802ba79c72aa421b68ce93573114e.scope/container/memory.events
Jan 29 09:40:35 compute-0 podman[252265]: 2026-01-29 09:40:35.452539346 +0000 UTC m=+0.128902625 container died 7dd938a7e4082ace1d7f7a14d28164055d2802ba79c72aa421b68ce93573114e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_herschel, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 09:40:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a148659aa2d91838cb42c5fd1ca8416c87ea447d10ddedc7155bddb60642fa2-merged.mount: Deactivated successfully.
Jan 29 09:40:35 compute-0 podman[252265]: 2026-01-29 09:40:35.49048227 +0000 UTC m=+0.166845549 container remove 7dd938a7e4082ace1d7f7a14d28164055d2802ba79c72aa421b68ce93573114e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_herschel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 09:40:35 compute-0 systemd[1]: libpod-conmon-7dd938a7e4082ace1d7f7a14d28164055d2802ba79c72aa421b68ce93573114e.scope: Deactivated successfully.
Jan 29 09:40:35 compute-0 podman[252303]: 2026-01-29 09:40:35.615033814 +0000 UTC m=+0.044882364 container create d0550c88f9dbac6682fbe328f7b64fdd7b8b7b59ea197639bc18905093e4c893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tesla, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:40:35 compute-0 systemd[1]: Started libpod-conmon-d0550c88f9dbac6682fbe328f7b64fdd7b8b7b59ea197639bc18905093e4c893.scope.
Jan 29 09:40:35 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e100e488f3f53d6479196c7fe5ecbcbcb169b1820e3cb230a10dae9a7a7d548/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e100e488f3f53d6479196c7fe5ecbcbcb169b1820e3cb230a10dae9a7a7d548/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e100e488f3f53d6479196c7fe5ecbcbcb169b1820e3cb230a10dae9a7a7d548/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e100e488f3f53d6479196c7fe5ecbcbcb169b1820e3cb230a10dae9a7a7d548/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:40:35 compute-0 podman[252303]: 2026-01-29 09:40:35.596258972 +0000 UTC m=+0.026107612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:40:35 compute-0 podman[252303]: 2026-01-29 09:40:35.701691115 +0000 UTC m=+0.131539695 container init d0550c88f9dbac6682fbe328f7b64fdd7b8b7b59ea197639bc18905093e4c893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tesla, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:40:35 compute-0 podman[252303]: 2026-01-29 09:40:35.708688376 +0000 UTC m=+0.138536926 container start d0550c88f9dbac6682fbe328f7b64fdd7b8b7b59ea197639bc18905093e4c893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 09:40:35 compute-0 podman[252303]: 2026-01-29 09:40:35.711794281 +0000 UTC m=+0.141642831 container attach d0550c88f9dbac6682fbe328f7b64fdd7b8b7b59ea197639bc18905093e4c893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:40:35 compute-0 admiring_tesla[252320]: {
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:     "0": [
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:         {
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "devices": [
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "/dev/loop3"
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             ],
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_name": "ceph_lv0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_size": "21470642176",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "name": "ceph_lv0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "tags": {
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.cluster_name": "ceph",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.crush_device_class": "",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.encrypted": "0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.objectstore": "bluestore",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.osd_id": "0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.type": "block",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.vdo": "0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.with_tpm": "0"
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             },
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "type": "block",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "vg_name": "ceph_vg0"
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:         }
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:     ],
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:     "1": [
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:         {
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "devices": [
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "/dev/loop4"
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             ],
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_name": "ceph_lv1",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_size": "21470642176",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "name": "ceph_lv1",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "tags": {
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.cluster_name": "ceph",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.crush_device_class": "",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.encrypted": "0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.objectstore": "bluestore",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.osd_id": "1",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.type": "block",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.vdo": "0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.with_tpm": "0"
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             },
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "type": "block",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "vg_name": "ceph_vg1"
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:         }
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:     ],
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:     "2": [
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:         {
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "devices": [
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "/dev/loop5"
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             ],
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_name": "ceph_lv2",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_size": "21470642176",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "name": "ceph_lv2",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "tags": {
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.cluster_name": "ceph",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.crush_device_class": "",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.encrypted": "0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.objectstore": "bluestore",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.osd_id": "2",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.type": "block",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.vdo": "0",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:                 "ceph.with_tpm": "0"
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             },
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "type": "block",
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:             "vg_name": "ceph_vg2"
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:         }
Jan 29 09:40:35 compute-0 admiring_tesla[252320]:     ]
Jan 29 09:40:35 compute-0 admiring_tesla[252320]: }
Jan 29 09:40:35 compute-0 systemd[1]: libpod-d0550c88f9dbac6682fbe328f7b64fdd7b8b7b59ea197639bc18905093e4c893.scope: Deactivated successfully.
Jan 29 09:40:35 compute-0 podman[252303]: 2026-01-29 09:40:35.980105242 +0000 UTC m=+0.409953792 container died d0550c88f9dbac6682fbe328f7b64fdd7b8b7b59ea197639bc18905093e4c893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 09:40:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e100e488f3f53d6479196c7fe5ecbcbcb169b1820e3cb230a10dae9a7a7d548-merged.mount: Deactivated successfully.
Jan 29 09:40:36 compute-0 podman[252303]: 2026-01-29 09:40:36.010046039 +0000 UTC m=+0.439894589 container remove d0550c88f9dbac6682fbe328f7b64fdd7b8b7b59ea197639bc18905093e4c893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tesla, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:40:36 compute-0 systemd[1]: libpod-conmon-d0550c88f9dbac6682fbe328f7b64fdd7b8b7b59ea197639bc18905093e4c893.scope: Deactivated successfully.
Jan 29 09:40:36 compute-0 sudo[252228]: pam_unix(sudo:session): session closed for user root
Jan 29 09:40:36 compute-0 sudo[252341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:40:36 compute-0 sudo[252341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:40:36 compute-0 sudo[252341]: pam_unix(sudo:session): session closed for user root
Jan 29 09:40:36 compute-0 sudo[252366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:40:36 compute-0 sudo[252366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:40:36 compute-0 ceph-mon[75183]: pgmap v943: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:36 compute-0 podman[252401]: 2026-01-29 09:40:36.390166807 +0000 UTC m=+0.048003959 container create 28630d2f27e9f9f65fce6056e451ee1c73cac21fbf8945169c240b8fc594f7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_maxwell, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:40:36 compute-0 systemd[1]: Started libpod-conmon-28630d2f27e9f9f65fce6056e451ee1c73cac21fbf8945169c240b8fc594f7dc.scope.
Jan 29 09:40:36 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:40:36 compute-0 podman[252401]: 2026-01-29 09:40:36.459666671 +0000 UTC m=+0.117503833 container init 28630d2f27e9f9f65fce6056e451ee1c73cac21fbf8945169c240b8fc594f7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_maxwell, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 09:40:36 compute-0 podman[252401]: 2026-01-29 09:40:36.365827894 +0000 UTC m=+0.023665126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:40:36 compute-0 podman[252401]: 2026-01-29 09:40:36.464306998 +0000 UTC m=+0.122144140 container start 28630d2f27e9f9f65fce6056e451ee1c73cac21fbf8945169c240b8fc594f7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:40:36 compute-0 podman[252401]: 2026-01-29 09:40:36.468249075 +0000 UTC m=+0.126086217 container attach 28630d2f27e9f9f65fce6056e451ee1c73cac21fbf8945169c240b8fc594f7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:40:36 compute-0 nervous_maxwell[252417]: 167 167
Jan 29 09:40:36 compute-0 systemd[1]: libpod-28630d2f27e9f9f65fce6056e451ee1c73cac21fbf8945169c240b8fc594f7dc.scope: Deactivated successfully.
Jan 29 09:40:36 compute-0 podman[252401]: 2026-01-29 09:40:36.472179502 +0000 UTC m=+0.130016654 container died 28630d2f27e9f9f65fce6056e451ee1c73cac21fbf8945169c240b8fc594f7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:40:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f86d3c30d74a266ad72002dc6fb4be85ad75ffe775d8a0de332a54252510454-merged.mount: Deactivated successfully.
Jan 29 09:40:36 compute-0 podman[252401]: 2026-01-29 09:40:36.508844771 +0000 UTC m=+0.166681913 container remove 28630d2f27e9f9f65fce6056e451ee1c73cac21fbf8945169c240b8fc594f7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:40:36 compute-0 systemd[1]: libpod-conmon-28630d2f27e9f9f65fce6056e451ee1c73cac21fbf8945169c240b8fc594f7dc.scope: Deactivated successfully.
Jan 29 09:40:36 compute-0 podman[252442]: 2026-01-29 09:40:36.637161977 +0000 UTC m=+0.041432990 container create 7c4f7bd7b462b2f6d46a0c6ea1785ebcba566b28a66bdfa737fa7c26a2eed439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:40:36 compute-0 systemd[1]: Started libpod-conmon-7c4f7bd7b462b2f6d46a0c6ea1785ebcba566b28a66bdfa737fa7c26a2eed439.scope.
Jan 29 09:40:36 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ba1d449849e8bc14c1b71bd87728396d1d6ca3086a54a6c68844bf6d72dcbeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ba1d449849e8bc14c1b71bd87728396d1d6ca3086a54a6c68844bf6d72dcbeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ba1d449849e8bc14c1b71bd87728396d1d6ca3086a54a6c68844bf6d72dcbeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ba1d449849e8bc14c1b71bd87728396d1d6ca3086a54a6c68844bf6d72dcbeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:40:36 compute-0 podman[252442]: 2026-01-29 09:40:36.710269759 +0000 UTC m=+0.114540782 container init 7c4f7bd7b462b2f6d46a0c6ea1785ebcba566b28a66bdfa737fa7c26a2eed439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 09:40:36 compute-0 podman[252442]: 2026-01-29 09:40:36.61526611 +0000 UTC m=+0.019537113 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:40:36 compute-0 podman[252442]: 2026-01-29 09:40:36.71690565 +0000 UTC m=+0.121176653 container start 7c4f7bd7b462b2f6d46a0c6ea1785ebcba566b28a66bdfa737fa7c26a2eed439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_margulis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:40:36 compute-0 podman[252442]: 2026-01-29 09:40:36.72019395 +0000 UTC m=+0.124464973 container attach 7c4f7bd7b462b2f6d46a0c6ea1785ebcba566b28a66bdfa737fa7c26a2eed439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 09:40:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v944: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:37 compute-0 lvm[252537]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:40:37 compute-0 lvm[252536]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:40:37 compute-0 lvm[252536]: VG ceph_vg0 finished
Jan 29 09:40:37 compute-0 lvm[252537]: VG ceph_vg1 finished
Jan 29 09:40:37 compute-0 lvm[252539]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:40:37 compute-0 lvm[252539]: VG ceph_vg2 finished
Jan 29 09:40:37 compute-0 priceless_margulis[252458]: {}
Jan 29 09:40:37 compute-0 systemd[1]: libpod-7c4f7bd7b462b2f6d46a0c6ea1785ebcba566b28a66bdfa737fa7c26a2eed439.scope: Deactivated successfully.
Jan 29 09:40:37 compute-0 systemd[1]: libpod-7c4f7bd7b462b2f6d46a0c6ea1785ebcba566b28a66bdfa737fa7c26a2eed439.scope: Consumed 1.274s CPU time.
Jan 29 09:40:37 compute-0 podman[252442]: 2026-01-29 09:40:37.598052263 +0000 UTC m=+1.002323266 container died 7c4f7bd7b462b2f6d46a0c6ea1785ebcba566b28a66bdfa737fa7c26a2eed439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 09:40:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ba1d449849e8bc14c1b71bd87728396d1d6ca3086a54a6c68844bf6d72dcbeb-merged.mount: Deactivated successfully.
Jan 29 09:40:37 compute-0 podman[252442]: 2026-01-29 09:40:37.644086208 +0000 UTC m=+1.048357211 container remove 7c4f7bd7b462b2f6d46a0c6ea1785ebcba566b28a66bdfa737fa7c26a2eed439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 09:40:37 compute-0 systemd[1]: libpod-conmon-7c4f7bd7b462b2f6d46a0c6ea1785ebcba566b28a66bdfa737fa7c26a2eed439.scope: Deactivated successfully.
Jan 29 09:40:37 compute-0 sudo[252366]: pam_unix(sudo:session): session closed for user root
Jan 29 09:40:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:40:37 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:40:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:40:37 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:40:37 compute-0 sudo[252555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:40:37 compute-0 sudo[252555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:40:37 compute-0 sudo[252555]: pam_unix(sudo:session): session closed for user root
Jan 29 09:40:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:40:38 compute-0 ceph-mon[75183]: pgmap v944: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:38 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:40:38 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:40:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v945: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:40 compute-0 ceph-mon[75183]: pgmap v945: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v946: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:41 compute-0 ceph-mon[75183]: pgmap v946: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v947: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:40:44 compute-0 ceph-mon[75183]: pgmap v947: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v948: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:46 compute-0 ceph-mon[75183]: pgmap v948: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v949: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:47 compute-0 nova_compute[236255]: 2026-01-29 09:40:47.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:40:47 compute-0 nova_compute[236255]: 2026-01-29 09:40:47.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:40:47 compute-0 nova_compute[236255]: 2026-01-29 09:40:47.556 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:40:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:40:48 compute-0 ceph-mon[75183]: pgmap v949: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v950: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:49 compute-0 nova_compute[236255]: 2026-01-29 09:40:49.552 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:40:50 compute-0 ceph-mon[75183]: pgmap v950: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:50 compute-0 nova_compute[236255]: 2026-01-29 09:40:50.550 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:40:50 compute-0 nova_compute[236255]: 2026-01-29 09:40:50.566 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:40:50 compute-0 nova_compute[236255]: 2026-01-29 09:40:50.567 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:40:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v951: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:40:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2505862342' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:40:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:40:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2505862342' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:40:51 compute-0 nova_compute[236255]: 2026-01-29 09:40:51.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:40:51 compute-0 nova_compute[236255]: 2026-01-29 09:40:51.586 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:40:51 compute-0 nova_compute[236255]: 2026-01-29 09:40:51.586 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:40:51 compute-0 nova_compute[236255]: 2026-01-29 09:40:51.586 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:40:51 compute-0 nova_compute[236255]: 2026-01-29 09:40:51.586 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:40:51 compute-0 nova_compute[236255]: 2026-01-29 09:40:51.586 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:40:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:40:52 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092852147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:40:52 compute-0 nova_compute[236255]: 2026-01-29 09:40:52.146 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:40:52 compute-0 ceph-mon[75183]: pgmap v951: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/2505862342' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:40:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/2505862342' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:40:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4092852147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:40:52 compute-0 nova_compute[236255]: 2026-01-29 09:40:52.315 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:40:52 compute-0 nova_compute[236255]: 2026-01-29 09:40:52.317 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5138MB free_disk=59.98826471157372GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:40:52 compute-0 nova_compute[236255]: 2026-01-29 09:40:52.317 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:40:52 compute-0 nova_compute[236255]: 2026-01-29 09:40:52.318 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:40:52 compute-0 nova_compute[236255]: 2026-01-29 09:40:52.406 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:40:52 compute-0 nova_compute[236255]: 2026-01-29 09:40:52.407 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:40:52 compute-0 nova_compute[236255]: 2026-01-29 09:40:52.431 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:40:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v952: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:40:53 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1381601621' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:40:53 compute-0 nova_compute[236255]: 2026-01-29 09:40:53.020 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:40:53 compute-0 nova_compute[236255]: 2026-01-29 09:40:53.026 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:40:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:40:53 compute-0 nova_compute[236255]: 2026-01-29 09:40:53.053 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:40:53 compute-0 nova_compute[236255]: 2026-01-29 09:40:53.054 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:40:53 compute-0 nova_compute[236255]: 2026-01-29 09:40:53.055 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:40:53 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1381601621' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:40:54 compute-0 ceph-mon[75183]: pgmap v952: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v953: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:55 compute-0 nova_compute[236255]: 2026-01-29 09:40:55.056 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:40:55 compute-0 nova_compute[236255]: 2026-01-29 09:40:55.056 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:40:55 compute-0 nova_compute[236255]: 2026-01-29 09:40:55.057 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:40:55 compute-0 nova_compute[236255]: 2026-01-29 09:40:55.073 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:40:55 compute-0 nova_compute[236255]: 2026-01-29 09:40:55.074 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:40:55 compute-0 nova_compute[236255]: 2026-01-29 09:40:55.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:40:56
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'backups', 'cephfs.cephfs.meta']
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:40:56 compute-0 ceph-mon[75183]: pgmap v953: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:40:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v954: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:40:58 compute-0 ceph-mon[75183]: pgmap v954: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v955: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:40:59 compute-0 podman[252624]: 2026-01-29 09:40:59.183434491 +0000 UTC m=+0.126549950 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 29 09:41:00 compute-0 ceph-mon[75183]: pgmap v955: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v956: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:01 compute-0 podman[252650]: 2026-01-29 09:41:01.100841644 +0000 UTC m=+0.046988581 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.756942845403104e-07 of space, bias 1.0, pg target 8.270828536209312e-05 quantized to 32 (current 32)
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.5000327611377854e-07 of space, bias 1.0, pg target 4.500098283413356e-05 quantized to 32 (current 32)
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683123502180493 of space, bias 1.0, pg target 0.2004937050654148 quantized to 32 (current 32)
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2051960589356773e-06 of space, bias 4.0, pg target 0.0014462352707228128 quantized to 16 (current 32)
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:41:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:41:02 compute-0 ceph-mon[75183]: pgmap v956: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v957: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:41:04 compute-0 ceph-mon[75183]: pgmap v957: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v958: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:06 compute-0 ceph-mon[75183]: pgmap v958: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v959: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:41:08 compute-0 ceph-mon[75183]: pgmap v959: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v960: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:41:09.046 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:41:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:41:09.047 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:41:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:41:09.047 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:41:10 compute-0 ceph-mon[75183]: pgmap v960: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v961: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:12 compute-0 ceph-mon[75183]: pgmap v961: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v962: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:41:14 compute-0 ceph-mon[75183]: pgmap v962: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v963: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:16 compute-0 ceph-mon[75183]: pgmap v963: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v964: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:41:18 compute-0 ceph-mon[75183]: pgmap v964: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v965: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:20 compute-0 ceph-mon[75183]: pgmap v965: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v966: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:22 compute-0 ceph-mon[75183]: pgmap v966: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v967: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:41:24 compute-0 ceph-mon[75183]: pgmap v967: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v968: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:26 compute-0 ceph-mon[75183]: pgmap v968: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:41:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:41:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:41:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:41:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:41:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:41:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v969: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:41:28 compute-0 ceph-mon[75183]: pgmap v969: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v970: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:30 compute-0 podman[252671]: 2026-01-29 09:41:30.170395474 +0000 UTC m=+0.112675842 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 09:41:30 compute-0 ceph-mon[75183]: pgmap v970: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v971: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:32 compute-0 podman[252697]: 2026-01-29 09:41:32.0978075 +0000 UTC m=+0.042775487 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 09:41:32 compute-0 ceph-mon[75183]: pgmap v971: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v972: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:41:34 compute-0 ceph-mon[75183]: pgmap v972: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v973: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:36 compute-0 ceph-mon[75183]: pgmap v973: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v974: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:37 compute-0 sudo[252717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:41:37 compute-0 sudo[252717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:41:37 compute-0 sudo[252717]: pam_unix(sudo:session): session closed for user root
Jan 29 09:41:37 compute-0 sudo[252742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:41:37 compute-0 sudo[252742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:41:38 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:41:38 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4550 writes, 20K keys, 4550 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4550 writes, 4550 syncs, 1.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1485 writes, 6750 keys, 1485 commit groups, 1.0 writes per commit group, ingest: 6.49 MB, 0.01 MB/s
                                           Interval WAL: 1485 writes, 1485 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    105.0      0.15              0.05        11    0.014       0      0       0.0       0.0
                                             L6      1/0    5.26 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   3.2    163.2    134.9      0.37              0.14        10    0.037     38K   5291       0.0       0.0
                                            Sum      1/0    5.26 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.2    116.2    126.3      0.52              0.19        21    0.025     38K   5291       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.5    127.1    130.0      0.24              0.10        10    0.024     21K   3017       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    163.2    134.9      0.37              0.14        10    0.037     38K   5291       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    107.9      0.15              0.05        10    0.015       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.015, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.04 MB/s write, 0.06 GB read, 0.03 MB/s read, 0.5 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55621d63f8d0#2 capacity: 308.00 MB usage: 5.54 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000116 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(541,5.22 MB,1.69488%) FilterBlock(22,114.36 KB,0.0362594%) IndexBlock(22,210.98 KB,0.0668959%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 29 09:41:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:41:38 compute-0 sudo[252742]: pam_unix(sudo:session): session closed for user root
Jan 29 09:41:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:41:38 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:41:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:41:38 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:41:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:41:38 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:41:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:41:38 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:41:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:41:38 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:41:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:41:38 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:41:38 compute-0 sudo[252799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:41:38 compute-0 sudo[252799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:41:38 compute-0 sudo[252799]: pam_unix(sudo:session): session closed for user root
Jan 29 09:41:38 compute-0 ceph-mon[75183]: pgmap v974: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:38 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:41:38 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:41:38 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:41:38 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:41:38 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:41:38 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:41:38 compute-0 sudo[252824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:41:38 compute-0 sudo[252824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:41:38 compute-0 podman[252861]: 2026-01-29 09:41:38.838764935 +0000 UTC m=+0.061568569 container create c34c14f381d03fb533f4bd526d23b0b722d8dc7451beaa361435ad21608579cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pare, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 09:41:38 compute-0 systemd[1]: Started libpod-conmon-c34c14f381d03fb533f4bd526d23b0b722d8dc7451beaa361435ad21608579cf.scope.
Jan 29 09:41:38 compute-0 podman[252861]: 2026-01-29 09:41:38.80079598 +0000 UTC m=+0.023599714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:41:38 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:41:38 compute-0 podman[252861]: 2026-01-29 09:41:38.923126294 +0000 UTC m=+0.145929958 container init c34c14f381d03fb533f4bd526d23b0b722d8dc7451beaa361435ad21608579cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 09:41:38 compute-0 podman[252861]: 2026-01-29 09:41:38.933915518 +0000 UTC m=+0.156719182 container start c34c14f381d03fb533f4bd526d23b0b722d8dc7451beaa361435ad21608579cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pare, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:41:38 compute-0 fervent_pare[252878]: 167 167
Jan 29 09:41:38 compute-0 systemd[1]: libpod-c34c14f381d03fb533f4bd526d23b0b722d8dc7451beaa361435ad21608579cf.scope: Deactivated successfully.
Jan 29 09:41:38 compute-0 podman[252861]: 2026-01-29 09:41:38.946574903 +0000 UTC m=+0.169378577 container attach c34c14f381d03fb533f4bd526d23b0b722d8dc7451beaa361435ad21608579cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:41:38 compute-0 podman[252861]: 2026-01-29 09:41:38.947219051 +0000 UTC m=+0.170022725 container died c34c14f381d03fb533f4bd526d23b0b722d8dc7451beaa361435ad21608579cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 09:41:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v975: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea6267807cc9e9e197a58e3757feae52faaf217c36c11755fe9bc17162752e5a-merged.mount: Deactivated successfully.
Jan 29 09:41:39 compute-0 podman[252861]: 2026-01-29 09:41:39.070436739 +0000 UTC m=+0.293240373 container remove c34c14f381d03fb533f4bd526d23b0b722d8dc7451beaa361435ad21608579cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_pare, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:41:39 compute-0 systemd[1]: libpod-conmon-c34c14f381d03fb533f4bd526d23b0b722d8dc7451beaa361435ad21608579cf.scope: Deactivated successfully.
Jan 29 09:41:39 compute-0 podman[252905]: 2026-01-29 09:41:39.207261138 +0000 UTC m=+0.037627917 container create 9fe9b51b08c5d79c544d5885230a8c2d5bab2346b8e12ad14b21a0dcfc35549c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 09:41:39 compute-0 systemd[1]: Started libpod-conmon-9fe9b51b08c5d79c544d5885230a8c2d5bab2346b8e12ad14b21a0dcfc35549c.scope.
Jan 29 09:41:39 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:41:39 compute-0 podman[252905]: 2026-01-29 09:41:39.189905465 +0000 UTC m=+0.020272254 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/074e8f1454236e0e20c1c88cf94f67d9626d4b6c99778192644e4bcd61e6ed3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/074e8f1454236e0e20c1c88cf94f67d9626d4b6c99778192644e4bcd61e6ed3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/074e8f1454236e0e20c1c88cf94f67d9626d4b6c99778192644e4bcd61e6ed3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/074e8f1454236e0e20c1c88cf94f67d9626d4b6c99778192644e4bcd61e6ed3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/074e8f1454236e0e20c1c88cf94f67d9626d4b6c99778192644e4bcd61e6ed3d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:41:39 compute-0 podman[252905]: 2026-01-29 09:41:39.303244293 +0000 UTC m=+0.133611062 container init 9fe9b51b08c5d79c544d5885230a8c2d5bab2346b8e12ad14b21a0dcfc35549c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:41:39 compute-0 podman[252905]: 2026-01-29 09:41:39.315127917 +0000 UTC m=+0.145494656 container start 9fe9b51b08c5d79c544d5885230a8c2d5bab2346b8e12ad14b21a0dcfc35549c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 09:41:39 compute-0 podman[252905]: 2026-01-29 09:41:39.319725192 +0000 UTC m=+0.150091951 container attach 9fe9b51b08c5d79c544d5885230a8c2d5bab2346b8e12ad14b21a0dcfc35549c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_dijkstra, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 09:41:39 compute-0 upbeat_dijkstra[252922]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:41:39 compute-0 upbeat_dijkstra[252922]: --> All data devices are unavailable
Jan 29 09:41:39 compute-0 systemd[1]: libpod-9fe9b51b08c5d79c544d5885230a8c2d5bab2346b8e12ad14b21a0dcfc35549c.scope: Deactivated successfully.
Jan 29 09:41:39 compute-0 podman[252905]: 2026-01-29 09:41:39.776554402 +0000 UTC m=+0.606921241 container died 9fe9b51b08c5d79c544d5885230a8c2d5bab2346b8e12ad14b21a0dcfc35549c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-074e8f1454236e0e20c1c88cf94f67d9626d4b6c99778192644e4bcd61e6ed3d-merged.mount: Deactivated successfully.
Jan 29 09:41:39 compute-0 podman[252905]: 2026-01-29 09:41:39.950460132 +0000 UTC m=+0.780826881 container remove 9fe9b51b08c5d79c544d5885230a8c2d5bab2346b8e12ad14b21a0dcfc35549c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 09:41:39 compute-0 sudo[252824]: pam_unix(sudo:session): session closed for user root
Jan 29 09:41:39 compute-0 systemd[1]: libpod-conmon-9fe9b51b08c5d79c544d5885230a8c2d5bab2346b8e12ad14b21a0dcfc35549c.scope: Deactivated successfully.
Jan 29 09:41:40 compute-0 sudo[252954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:41:40 compute-0 sudo[252954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:41:40 compute-0 sudo[252954]: pam_unix(sudo:session): session closed for user root
Jan 29 09:41:40 compute-0 sudo[252979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:41:40 compute-0 sudo[252979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:41:40 compute-0 podman[253016]: 2026-01-29 09:41:40.294461446 +0000 UTC m=+0.016313145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:41:40 compute-0 podman[253016]: 2026-01-29 09:41:40.447400024 +0000 UTC m=+0.169251733 container create e385ee60daf2746c6c3ff9fd8a03f41ed0ec0cbc6a49eac8e0e4959ad4050c31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_noyce, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:41:40 compute-0 systemd[1]: Started libpod-conmon-e385ee60daf2746c6c3ff9fd8a03f41ed0ec0cbc6a49eac8e0e4959ad4050c31.scope.
Jan 29 09:41:40 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:41:40 compute-0 ceph-mon[75183]: pgmap v975: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:40 compute-0 podman[253016]: 2026-01-29 09:41:40.747284707 +0000 UTC m=+0.469136426 container init e385ee60daf2746c6c3ff9fd8a03f41ed0ec0cbc6a49eac8e0e4959ad4050c31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:41:40 compute-0 podman[253016]: 2026-01-29 09:41:40.756226751 +0000 UTC m=+0.478078470 container start e385ee60daf2746c6c3ff9fd8a03f41ed0ec0cbc6a49eac8e0e4959ad4050c31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_noyce, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 09:41:40 compute-0 eloquent_noyce[253032]: 167 167
Jan 29 09:41:40 compute-0 systemd[1]: libpod-e385ee60daf2746c6c3ff9fd8a03f41ed0ec0cbc6a49eac8e0e4959ad4050c31.scope: Deactivated successfully.
Jan 29 09:41:40 compute-0 podman[253016]: 2026-01-29 09:41:40.774726985 +0000 UTC m=+0.496578784 container attach e385ee60daf2746c6c3ff9fd8a03f41ed0ec0cbc6a49eac8e0e4959ad4050c31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_noyce, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:41:40 compute-0 podman[253016]: 2026-01-29 09:41:40.776301748 +0000 UTC m=+0.498153467 container died e385ee60daf2746c6c3ff9fd8a03f41ed0ec0cbc6a49eac8e0e4959ad4050c31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_noyce, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:41:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c4a68ed97d8167512277067581a660241bcb8d3859c21cb5ee3a839ed24469b-merged.mount: Deactivated successfully.
Jan 29 09:41:40 compute-0 podman[253016]: 2026-01-29 09:41:40.910354151 +0000 UTC m=+0.632205840 container remove e385ee60daf2746c6c3ff9fd8a03f41ed0ec0cbc6a49eac8e0e4959ad4050c31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:41:40 compute-0 systemd[1]: libpod-conmon-e385ee60daf2746c6c3ff9fd8a03f41ed0ec0cbc6a49eac8e0e4959ad4050c31.scope: Deactivated successfully.
Jan 29 09:41:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v976: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:41 compute-0 podman[253059]: 2026-01-29 09:41:41.081662139 +0000 UTC m=+0.067399408 container create 9dab32ada9b3190c34a0892da1d41061f8451597894b3a069d41a9723c4e1e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_sammet, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 09:41:41 compute-0 podman[253059]: 2026-01-29 09:41:41.042242824 +0000 UTC m=+0.027980143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:41:41 compute-0 systemd[1]: Started libpod-conmon-9dab32ada9b3190c34a0892da1d41061f8451597894b3a069d41a9723c4e1e8e.scope.
Jan 29 09:41:41 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7de8b92653f9546ea03bdf5728eb5a77073b248bbcee3c8b4c4ebb48bc3355/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7de8b92653f9546ea03bdf5728eb5a77073b248bbcee3c8b4c4ebb48bc3355/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7de8b92653f9546ea03bdf5728eb5a77073b248bbcee3c8b4c4ebb48bc3355/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7de8b92653f9546ea03bdf5728eb5a77073b248bbcee3c8b4c4ebb48bc3355/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:41:41 compute-0 podman[253059]: 2026-01-29 09:41:41.207904089 +0000 UTC m=+0.193641378 container init 9dab32ada9b3190c34a0892da1d41061f8451597894b3a069d41a9723c4e1e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 09:41:41 compute-0 podman[253059]: 2026-01-29 09:41:41.213735098 +0000 UTC m=+0.199472367 container start 9dab32ada9b3190c34a0892da1d41061f8451597894b3a069d41a9723c4e1e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_sammet, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:41:41 compute-0 podman[253059]: 2026-01-29 09:41:41.218120837 +0000 UTC m=+0.203858156 container attach 9dab32ada9b3190c34a0892da1d41061f8451597894b3a069d41a9723c4e1e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 09:41:41 compute-0 fervent_sammet[253076]: {
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:     "0": [
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:         {
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "devices": [
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "/dev/loop3"
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             ],
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_name": "ceph_lv0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_size": "21470642176",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "name": "ceph_lv0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "tags": {
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.cluster_name": "ceph",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.crush_device_class": "",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.encrypted": "0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.objectstore": "bluestore",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.osd_id": "0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.type": "block",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.vdo": "0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.with_tpm": "0"
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             },
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "type": "block",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "vg_name": "ceph_vg0"
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:         }
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:     ],
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:     "1": [
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:         {
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "devices": [
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "/dev/loop4"
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             ],
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_name": "ceph_lv1",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_size": "21470642176",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "name": "ceph_lv1",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "tags": {
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.cluster_name": "ceph",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.crush_device_class": "",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.encrypted": "0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.objectstore": "bluestore",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.osd_id": "1",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.type": "block",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.vdo": "0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.with_tpm": "0"
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             },
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "type": "block",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "vg_name": "ceph_vg1"
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:         }
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:     ],
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:     "2": [
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:         {
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "devices": [
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "/dev/loop5"
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             ],
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_name": "ceph_lv2",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_size": "21470642176",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "name": "ceph_lv2",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "tags": {
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.cluster_name": "ceph",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.crush_device_class": "",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.encrypted": "0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.objectstore": "bluestore",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.osd_id": "2",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.type": "block",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.vdo": "0",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:                 "ceph.with_tpm": "0"
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             },
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "type": "block",
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:             "vg_name": "ceph_vg2"
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:         }
Jan 29 09:41:41 compute-0 fervent_sammet[253076]:     ]
Jan 29 09:41:41 compute-0 fervent_sammet[253076]: }
Jan 29 09:41:41 compute-0 systemd[1]: libpod-9dab32ada9b3190c34a0892da1d41061f8451597894b3a069d41a9723c4e1e8e.scope: Deactivated successfully.
Jan 29 09:41:41 compute-0 podman[253059]: 2026-01-29 09:41:41.478516884 +0000 UTC m=+0.464254153 container died 9dab32ada9b3190c34a0892da1d41061f8451597894b3a069d41a9723c4e1e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 09:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae7de8b92653f9546ea03bdf5728eb5a77073b248bbcee3c8b4c4ebb48bc3355-merged.mount: Deactivated successfully.
Jan 29 09:41:41 compute-0 podman[253059]: 2026-01-29 09:41:41.563641244 +0000 UTC m=+0.549378513 container remove 9dab32ada9b3190c34a0892da1d41061f8451597894b3a069d41a9723c4e1e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:41:41 compute-0 systemd[1]: libpod-conmon-9dab32ada9b3190c34a0892da1d41061f8451597894b3a069d41a9723c4e1e8e.scope: Deactivated successfully.
Jan 29 09:41:41 compute-0 sudo[252979]: pam_unix(sudo:session): session closed for user root
Jan 29 09:41:41 compute-0 sudo[253098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:41:41 compute-0 sudo[253098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:41:41 compute-0 sudo[253098]: pam_unix(sudo:session): session closed for user root
Jan 29 09:41:41 compute-0 sudo[253123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:41:41 compute-0 sudo[253123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:41:41 compute-0 ceph-mon[75183]: pgmap v976: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:42 compute-0 podman[253160]: 2026-01-29 09:41:42.047034588 +0000 UTC m=+0.114222144 container create 0d39207c8f3c87d4de2edc393ce9e75dff50eb903ed41cfd5d089f8f16318fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 29 09:41:42 compute-0 podman[253160]: 2026-01-29 09:41:41.955217675 +0000 UTC m=+0.022405241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:41:42 compute-0 systemd[1]: Started libpod-conmon-0d39207c8f3c87d4de2edc393ce9e75dff50eb903ed41cfd5d089f8f16318fdd.scope.
Jan 29 09:41:42 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:41:42 compute-0 podman[253160]: 2026-01-29 09:41:42.172942089 +0000 UTC m=+0.240129675 container init 0d39207c8f3c87d4de2edc393ce9e75dff50eb903ed41cfd5d089f8f16318fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_payne, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:41:42 compute-0 podman[253160]: 2026-01-29 09:41:42.178655684 +0000 UTC m=+0.245843230 container start 0d39207c8f3c87d4de2edc393ce9e75dff50eb903ed41cfd5d089f8f16318fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_payne, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:41:42 compute-0 competent_payne[253176]: 167 167
Jan 29 09:41:42 compute-0 podman[253160]: 2026-01-29 09:41:42.182371416 +0000 UTC m=+0.249558992 container attach 0d39207c8f3c87d4de2edc393ce9e75dff50eb903ed41cfd5d089f8f16318fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_payne, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 09:41:42 compute-0 systemd[1]: libpod-0d39207c8f3c87d4de2edc393ce9e75dff50eb903ed41cfd5d089f8f16318fdd.scope: Deactivated successfully.
Jan 29 09:41:42 compute-0 podman[253160]: 2026-01-29 09:41:42.184208076 +0000 UTC m=+0.251395622 container died 0d39207c8f3c87d4de2edc393ce9e75dff50eb903ed41cfd5d089f8f16318fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:41:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cebc3a87aa7dffadff1a222884d4abce17255f02f1c656be31901418ecceb97-merged.mount: Deactivated successfully.
Jan 29 09:41:42 compute-0 podman[253160]: 2026-01-29 09:41:42.325406614 +0000 UTC m=+0.392594160 container remove 0d39207c8f3c87d4de2edc393ce9e75dff50eb903ed41cfd5d089f8f16318fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_payne, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:41:42 compute-0 systemd[1]: libpod-conmon-0d39207c8f3c87d4de2edc393ce9e75dff50eb903ed41cfd5d089f8f16318fdd.scope: Deactivated successfully.
Jan 29 09:41:42 compute-0 podman[253199]: 2026-01-29 09:41:42.467373073 +0000 UTC m=+0.042007056 container create d52f620f6460d4158731c733da3a733581466ee3a92cac00b8a38442d4685471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:41:42 compute-0 podman[253199]: 2026-01-29 09:41:42.448312963 +0000 UTC m=+0.022946956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:41:42 compute-0 systemd[1]: Started libpod-conmon-d52f620f6460d4158731c733da3a733581466ee3a92cac00b8a38442d4685471.scope.
Jan 29 09:41:42 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8236bbd4eb617905cfa06b32fde02d21a959fe81f4b86cf0c833e2f68be70fb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8236bbd4eb617905cfa06b32fde02d21a959fe81f4b86cf0c833e2f68be70fb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8236bbd4eb617905cfa06b32fde02d21a959fe81f4b86cf0c833e2f68be70fb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8236bbd4eb617905cfa06b32fde02d21a959fe81f4b86cf0c833e2f68be70fb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:41:42 compute-0 podman[253199]: 2026-01-29 09:41:42.753613423 +0000 UTC m=+0.328247406 container init d52f620f6460d4158731c733da3a733581466ee3a92cac00b8a38442d4685471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 09:41:42 compute-0 podman[253199]: 2026-01-29 09:41:42.760329857 +0000 UTC m=+0.334963860 container start d52f620f6460d4158731c733da3a733581466ee3a92cac00b8a38442d4685471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williamson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 09:41:42 compute-0 podman[253199]: 2026-01-29 09:41:42.767580864 +0000 UTC m=+0.342214927 container attach d52f620f6460d4158731c733da3a733581466ee3a92cac00b8a38442d4685471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williamson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 29 09:41:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v977: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:41:43 compute-0 lvm[253293]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:41:43 compute-0 lvm[253293]: VG ceph_vg0 finished
Jan 29 09:41:43 compute-0 lvm[253294]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:41:43 compute-0 lvm[253294]: VG ceph_vg1 finished
Jan 29 09:41:43 compute-0 lvm[253296]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:41:43 compute-0 lvm[253296]: VG ceph_vg2 finished
Jan 29 09:41:43 compute-0 epic_williamson[253215]: {}
Jan 29 09:41:43 compute-0 systemd[1]: libpod-d52f620f6460d4158731c733da3a733581466ee3a92cac00b8a38442d4685471.scope: Deactivated successfully.
Jan 29 09:41:43 compute-0 podman[253199]: 2026-01-29 09:41:43.523814003 +0000 UTC m=+1.098447966 container died d52f620f6460d4158731c733da3a733581466ee3a92cac00b8a38442d4685471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 09:41:43 compute-0 systemd[1]: libpod-d52f620f6460d4158731c733da3a733581466ee3a92cac00b8a38442d4685471.scope: Consumed 1.043s CPU time.
Jan 29 09:41:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8236bbd4eb617905cfa06b32fde02d21a959fe81f4b86cf0c833e2f68be70fb5-merged.mount: Deactivated successfully.
Jan 29 09:41:43 compute-0 podman[253199]: 2026-01-29 09:41:43.565579912 +0000 UTC m=+1.140213875 container remove d52f620f6460d4158731c733da3a733581466ee3a92cac00b8a38442d4685471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:41:43 compute-0 systemd[1]: libpod-conmon-d52f620f6460d4158731c733da3a733581466ee3a92cac00b8a38442d4685471.scope: Deactivated successfully.
Jan 29 09:41:43 compute-0 sudo[253123]: pam_unix(sudo:session): session closed for user root
Jan 29 09:41:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:41:43 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:41:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:41:43 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:41:43 compute-0 sudo[253313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:41:43 compute-0 sudo[253313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:41:43 compute-0 sudo[253313]: pam_unix(sudo:session): session closed for user root
Jan 29 09:41:44 compute-0 ceph-mon[75183]: pgmap v977: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:44 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:41:44 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:41:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v978: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:46 compute-0 ceph-mon[75183]: pgmap v978: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v979: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:47 compute-0 nova_compute[236255]: 2026-01-29 09:41:47.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:41:48 compute-0 ceph-mon[75183]: pgmap v979: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:41:48 compute-0 nova_compute[236255]: 2026-01-29 09:41:48.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:41:48 compute-0 nova_compute[236255]: 2026-01-29 09:41:48.555 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:41:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v980: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:50 compute-0 ceph-mon[75183]: pgmap v980: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v981: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:41:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4182081404' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:41:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:41:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4182081404' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:41:51 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/4182081404' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:41:51 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/4182081404' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:41:51 compute-0 nova_compute[236255]: 2026-01-29 09:41:51.551 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:41:51 compute-0 nova_compute[236255]: 2026-01-29 09:41:51.554 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:41:51 compute-0 nova_compute[236255]: 2026-01-29 09:41:51.554 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:41:52 compute-0 ceph-mon[75183]: pgmap v981: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:52 compute-0 nova_compute[236255]: 2026-01-29 09:41:52.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:41:52 compute-0 nova_compute[236255]: 2026-01-29 09:41:52.586 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:41:52 compute-0 nova_compute[236255]: 2026-01-29 09:41:52.586 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:41:52 compute-0 nova_compute[236255]: 2026-01-29 09:41:52.587 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:41:52 compute-0 nova_compute[236255]: 2026-01-29 09:41:52.587 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:41:52 compute-0 nova_compute[236255]: 2026-01-29 09:41:52.587 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:41:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v982: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:41:53 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/865516131' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:41:53 compute-0 nova_compute[236255]: 2026-01-29 09:41:53.095 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:41:53 compute-0 nova_compute[236255]: 2026-01-29 09:41:53.209 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:41:53 compute-0 nova_compute[236255]: 2026-01-29 09:41:53.210 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5100MB free_disk=59.98826471157372GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:41:53 compute-0 nova_compute[236255]: 2026-01-29 09:41:53.210 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:41:53 compute-0 nova_compute[236255]: 2026-01-29 09:41:53.210 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:41:53 compute-0 nova_compute[236255]: 2026-01-29 09:41:53.291 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:41:53 compute-0 nova_compute[236255]: 2026-01-29 09:41:53.291 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:41:53 compute-0 nova_compute[236255]: 2026-01-29 09:41:53.315 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:41:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:41:53 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/865516131' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:41:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:41:53 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229339948' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:41:53 compute-0 nova_compute[236255]: 2026-01-29 09:41:53.839 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:41:53 compute-0 nova_compute[236255]: 2026-01-29 09:41:53.843 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:41:53 compute-0 nova_compute[236255]: 2026-01-29 09:41:53.859 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:41:53 compute-0 nova_compute[236255]: 2026-01-29 09:41:53.861 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:41:53 compute-0 nova_compute[236255]: 2026-01-29 09:41:53.861 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:41:54 compute-0 ceph-mon[75183]: pgmap v982: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:54 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4229339948' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:41:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v983: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.831480) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679715831522, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1409, "num_deletes": 251, "total_data_size": 1522564, "memory_usage": 1548888, "flush_reason": "Manual Compaction"}
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 29 09:41:55 compute-0 ceph-mon[75183]: pgmap v983: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679715863818, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1481994, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19191, "largest_seqno": 20599, "table_properties": {"data_size": 1475334, "index_size": 3799, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13792, "raw_average_key_size": 19, "raw_value_size": 1462013, "raw_average_value_size": 2103, "num_data_blocks": 174, "num_entries": 695, "num_filter_entries": 695, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769679569, "oldest_key_time": 1769679569, "file_creation_time": 1769679715, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 32388 microseconds, and 4088 cpu microseconds.
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.863865) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1481994 bytes OK
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.863884) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.877861) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.877922) EVENT_LOG_v1 {"time_micros": 1769679715877907, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.877955) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1516290, prev total WAL file size 1516290, number of live WAL files 2.
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.879009) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1447KB)], [47(5387KB)]
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679715879110, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 6998898, "oldest_snapshot_seqno": -1}
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4054 keys, 5748774 bytes, temperature: kUnknown
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679715953639, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 5748774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5720157, "index_size": 17371, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 98170, "raw_average_key_size": 24, "raw_value_size": 5645797, "raw_average_value_size": 1392, "num_data_blocks": 738, "num_entries": 4054, "num_filter_entries": 4054, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769677896, "oldest_key_time": 0, "file_creation_time": 1769679715, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fd1ba9b2-de41-4690-b6ef-2d821bd14da8", "db_session_id": "ZPKAZ83W1X1TPV5WJTP0", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.953910) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 5748774 bytes
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.955445) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.9 rd, 77.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 5.3 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(8.6) write-amplify(3.9) OK, records in: 4568, records dropped: 514 output_compression: NoCompression
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.955468) EVENT_LOG_v1 {"time_micros": 1769679715955457, "job": 24, "event": "compaction_finished", "compaction_time_micros": 74567, "compaction_time_cpu_micros": 19566, "output_level": 6, "num_output_files": 1, "total_output_size": 5748774, "num_input_records": 4568, "num_output_records": 4054, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679715955840, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769679715956590, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.878821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.956728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.956735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.956738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.956740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:41:55 compute-0 ceph-mon[75183]: rocksdb: (Original Log Time 2026/01/29-09:41:55.956743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:41:56
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['backups', 'volumes', 'images', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms']
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:41:56 compute-0 nova_compute[236255]: 2026-01-29 09:41:56.861 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:41:56 compute-0 nova_compute[236255]: 2026-01-29 09:41:56.861 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:41:56 compute-0 nova_compute[236255]: 2026-01-29 09:41:56.862 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:41:56 compute-0 nova_compute[236255]: 2026-01-29 09:41:56.876 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:41:56 compute-0 nova_compute[236255]: 2026-01-29 09:41:56.876 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:41:56 compute-0 nova_compute[236255]: 2026-01-29 09:41:56.876 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:41:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v984: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:58 compute-0 ceph-mon[75183]: pgmap v984: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:41:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:41:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v985: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:00 compute-0 ceph-mon[75183]: pgmap v985: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v986: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:01 compute-0 podman[253382]: 2026-01-29 09:42:01.160808498 +0000 UTC m=+0.096170052 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.756942845403104e-07 of space, bias 1.0, pg target 8.270828536209312e-05 quantized to 32 (current 32)
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.5000327611377854e-07 of space, bias 1.0, pg target 4.500098283413356e-05 quantized to 32 (current 32)
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683123502180493 of space, bias 1.0, pg target 0.2004937050654148 quantized to 32 (current 32)
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2051960589356773e-06 of space, bias 4.0, pg target 0.0014462352707228128 quantized to 16 (current 32)
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:42:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:42:02 compute-0 ceph-mon[75183]: pgmap v986: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v987: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:03 compute-0 podman[253409]: 2026-01-29 09:42:03.127865495 +0000 UTC m=+0.061913449 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 29 09:42:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:42:04 compute-0 ceph-mon[75183]: pgmap v987: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v988: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:06 compute-0 ceph-mon[75183]: pgmap v988: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v989: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:42:08 compute-0 ceph-mon[75183]: pgmap v989: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v990: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:42:09.048 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:42:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:42:09.048 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:42:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:42:09.048 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:42:10 compute-0 ceph-mon[75183]: pgmap v990: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v991: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:12 compute-0 ceph-mon[75183]: pgmap v991: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v992: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:42:14 compute-0 ceph-mon[75183]: pgmap v992: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v993: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:16 compute-0 ceph-mon[75183]: pgmap v993: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v994: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:42:18 compute-0 ceph-mon[75183]: pgmap v994: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v995: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:20 compute-0 ceph-mon[75183]: pgmap v995: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v996: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:22 compute-0 ceph-mon[75183]: pgmap v996: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v997: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:42:24 compute-0 ceph-mon[75183]: pgmap v997: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v998: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:26 compute-0 ceph-mon[75183]: pgmap v998: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:42:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:42:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:42:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:42:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:42:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:42:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v999: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:42:28 compute-0 ceph-mon[75183]: pgmap v999: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1000: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:30 compute-0 ceph-mon[75183]: pgmap v1000: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1001: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:31 compute-0 ceph-mon[75183]: pgmap v1001: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:32 compute-0 podman[253428]: 2026-01-29 09:42:32.138475125 +0000 UTC m=+0.081109621 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 29 09:42:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1002: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:42:34 compute-0 ceph-mon[75183]: pgmap v1002: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:34 compute-0 podman[253454]: 2026-01-29 09:42:34.144911065 +0000 UTC m=+0.084960786 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 29 09:42:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1003: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:36 compute-0 ceph-mon[75183]: pgmap v1003: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1004: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:38 compute-0 ceph-mon[75183]: pgmap v1004: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:42:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1005: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:40 compute-0 ceph-mon[75183]: pgmap v1005: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1006: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:42 compute-0 ceph-mon[75183]: pgmap v1006: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1007: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:42:43 compute-0 sudo[253473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:42:43 compute-0 sudo[253473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:42:43 compute-0 sudo[253473]: pam_unix(sudo:session): session closed for user root
Jan 29 09:42:43 compute-0 sudo[253498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:42:43 compute-0 sudo[253498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:42:44 compute-0 ceph-mon[75183]: pgmap v1007: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:44 compute-0 sudo[253498]: pam_unix(sudo:session): session closed for user root
Jan 29 09:42:44 compute-0 sudo[253554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:42:44 compute-0 sudo[253554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:42:44 compute-0 sudo[253554]: pam_unix(sudo:session): session closed for user root
Jan 29 09:42:44 compute-0 sudo[253579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 29 09:42:44 compute-0 sudo[253579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:42:44 compute-0 sudo[253579]: pam_unix(sudo:session): session closed for user root
Jan 29 09:42:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:42:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:42:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:42:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:42:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:42:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:42:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:42:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:42:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:42:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:42:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:42:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:42:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:42:44 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:42:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:42:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:42:44 compute-0 sudo[253622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:42:44 compute-0 sudo[253622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:42:44 compute-0 sudo[253622]: pam_unix(sudo:session): session closed for user root
Jan 29 09:42:44 compute-0 sudo[253647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:42:44 compute-0 sudo[253647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:42:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1008: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:45 compute-0 podman[253685]: 2026-01-29 09:42:45.156627516 +0000 UTC m=+0.073147604 container create 8c653b57754df8b3eaacd859f2517babefb2dc7be5bb455d0605d83c04004822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:42:45 compute-0 podman[253685]: 2026-01-29 09:42:45.108804153 +0000 UTC m=+0.025324271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:42:45 compute-0 systemd[1]: Started libpod-conmon-8c653b57754df8b3eaacd859f2517babefb2dc7be5bb455d0605d83c04004822.scope.
Jan 29 09:42:45 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:42:45 compute-0 podman[253685]: 2026-01-29 09:42:45.313238024 +0000 UTC m=+0.229758142 container init 8c653b57754df8b3eaacd859f2517babefb2dc7be5bb455d0605d83c04004822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_yalow, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:42:45 compute-0 podman[253685]: 2026-01-29 09:42:45.319206037 +0000 UTC m=+0.235726125 container start 8c653b57754df8b3eaacd859f2517babefb2dc7be5bb455d0605d83c04004822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_yalow, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 09:42:45 compute-0 systemd[1]: libpod-8c653b57754df8b3eaacd859f2517babefb2dc7be5bb455d0605d83c04004822.scope: Deactivated successfully.
Jan 29 09:42:45 compute-0 vigilant_yalow[253701]: 167 167
Jan 29 09:42:45 compute-0 conmon[253701]: conmon 8c653b57754df8b3eaac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c653b57754df8b3eaacd859f2517babefb2dc7be5bb455d0605d83c04004822.scope/container/memory.events
Jan 29 09:42:45 compute-0 podman[253685]: 2026-01-29 09:42:45.358418395 +0000 UTC m=+0.274938503 container attach 8c653b57754df8b3eaacd859f2517babefb2dc7be5bb455d0605d83c04004822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 09:42:45 compute-0 podman[253685]: 2026-01-29 09:42:45.359612168 +0000 UTC m=+0.276132296 container died 8c653b57754df8b3eaacd859f2517babefb2dc7be5bb455d0605d83c04004822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 09:42:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ab59ade0320da1da6de303fd4d787a1563deb5565e635cd1b54e1508f83fad5-merged.mount: Deactivated successfully.
Jan 29 09:42:45 compute-0 podman[253685]: 2026-01-29 09:42:45.707061116 +0000 UTC m=+0.623581244 container remove 8c653b57754df8b3eaacd859f2517babefb2dc7be5bb455d0605d83c04004822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_yalow, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 09:42:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:42:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:42:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:42:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:42:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:42:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:42:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:42:45 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:42:45 compute-0 ceph-mon[75183]: pgmap v1008: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:45 compute-0 systemd[1]: libpod-conmon-8c653b57754df8b3eaacd859f2517babefb2dc7be5bb455d0605d83c04004822.scope: Deactivated successfully.
Jan 29 09:42:45 compute-0 podman[253725]: 2026-01-29 09:42:45.956075622 +0000 UTC m=+0.128976906 container create 5f5e6af92e05ba9239428026e12712109d6b08b9051a1b4df08f7aaca510a62a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:42:45 compute-0 podman[253725]: 2026-01-29 09:42:45.867427166 +0000 UTC m=+0.040328510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:42:46 compute-0 systemd[1]: Started libpod-conmon-5f5e6af92e05ba9239428026e12712109d6b08b9051a1b4df08f7aaca510a62a.scope.
Jan 29 09:42:46 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9fd228aeebee0257b95b4a7fdbe9fbe30380db2620337d7d711bce41330f133/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9fd228aeebee0257b95b4a7fdbe9fbe30380db2620337d7d711bce41330f133/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9fd228aeebee0257b95b4a7fdbe9fbe30380db2620337d7d711bce41330f133/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9fd228aeebee0257b95b4a7fdbe9fbe30380db2620337d7d711bce41330f133/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9fd228aeebee0257b95b4a7fdbe9fbe30380db2620337d7d711bce41330f133/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:42:46 compute-0 podman[253725]: 2026-01-29 09:42:46.141614498 +0000 UTC m=+0.314515832 container init 5f5e6af92e05ba9239428026e12712109d6b08b9051a1b4df08f7aaca510a62a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 09:42:46 compute-0 podman[253725]: 2026-01-29 09:42:46.149086872 +0000 UTC m=+0.321988116 container start 5f5e6af92e05ba9239428026e12712109d6b08b9051a1b4df08f7aaca510a62a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 09:42:46 compute-0 podman[253725]: 2026-01-29 09:42:46.222245916 +0000 UTC m=+0.395147270 container attach 5f5e6af92e05ba9239428026e12712109d6b08b9051a1b4df08f7aaca510a62a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 09:42:46 compute-0 loving_euler[253742]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:42:46 compute-0 loving_euler[253742]: --> All data devices are unavailable
Jan 29 09:42:46 compute-0 systemd[1]: libpod-5f5e6af92e05ba9239428026e12712109d6b08b9051a1b4df08f7aaca510a62a.scope: Deactivated successfully.
Jan 29 09:42:46 compute-0 podman[253725]: 2026-01-29 09:42:46.548510807 +0000 UTC m=+0.721412051 container died 5f5e6af92e05ba9239428026e12712109d6b08b9051a1b4df08f7aaca510a62a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:42:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:42:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5493 writes, 23K keys, 5493 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5493 writes, 912 syncs, 6.02 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1189 writes, 3320 keys, 1189 commit groups, 1.0 writes per commit group, ingest: 1.85 MB, 0.00 MB/s
                                           Interval WAL: 1189 writes, 512 syncs, 2.32 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 09:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9fd228aeebee0257b95b4a7fdbe9fbe30380db2620337d7d711bce41330f133-merged.mount: Deactivated successfully.
Jan 29 09:42:46 compute-0 podman[253725]: 2026-01-29 09:42:46.977036595 +0000 UTC m=+1.149937839 container remove 5f5e6af92e05ba9239428026e12712109d6b08b9051a1b4df08f7aaca510a62a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:42:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1009: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:47 compute-0 systemd[1]: libpod-conmon-5f5e6af92e05ba9239428026e12712109d6b08b9051a1b4df08f7aaca510a62a.scope: Deactivated successfully.
Jan 29 09:42:47 compute-0 sudo[253647]: pam_unix(sudo:session): session closed for user root
Jan 29 09:42:47 compute-0 sudo[253775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:42:47 compute-0 sudo[253775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:42:47 compute-0 sudo[253775]: pam_unix(sudo:session): session closed for user root
Jan 29 09:42:47 compute-0 sudo[253800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:42:47 compute-0 sudo[253800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:42:47 compute-0 podman[253838]: 2026-01-29 09:42:47.466747761 +0000 UTC m=+0.099277956 container create 726b5f3b27a7cde9077da1202ccf52325172545356964e88b2f703bec2148877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 29 09:42:47 compute-0 podman[253838]: 2026-01-29 09:42:47.387845701 +0000 UTC m=+0.020375916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:42:47 compute-0 systemd[1]: Started libpod-conmon-726b5f3b27a7cde9077da1202ccf52325172545356964e88b2f703bec2148877.scope.
Jan 29 09:42:47 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:42:47 compute-0 podman[253838]: 2026-01-29 09:42:47.599367765 +0000 UTC m=+0.231897970 container init 726b5f3b27a7cde9077da1202ccf52325172545356964e88b2f703bec2148877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:42:47 compute-0 podman[253838]: 2026-01-29 09:42:47.606299164 +0000 UTC m=+0.238829369 container start 726b5f3b27a7cde9077da1202ccf52325172545356964e88b2f703bec2148877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_nash, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:42:47 compute-0 trusting_nash[253855]: 167 167
Jan 29 09:42:47 compute-0 systemd[1]: libpod-726b5f3b27a7cde9077da1202ccf52325172545356964e88b2f703bec2148877.scope: Deactivated successfully.
Jan 29 09:42:47 compute-0 podman[253838]: 2026-01-29 09:42:47.66927224 +0000 UTC m=+0.301802435 container attach 726b5f3b27a7cde9077da1202ccf52325172545356964e88b2f703bec2148877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_nash, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:42:47 compute-0 podman[253838]: 2026-01-29 09:42:47.669826016 +0000 UTC m=+0.302356211 container died 726b5f3b27a7cde9077da1202ccf52325172545356964e88b2f703bec2148877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_nash, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 09:42:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-02c7053e8ae3bd45ee86e0b0203ed99bd7a19e515d17cf49b490d7e9384a0410-merged.mount: Deactivated successfully.
Jan 29 09:42:48 compute-0 podman[253838]: 2026-01-29 09:42:48.149776435 +0000 UTC m=+0.782306650 container remove 726b5f3b27a7cde9077da1202ccf52325172545356964e88b2f703bec2148877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:42:48 compute-0 systemd[1]: libpod-conmon-726b5f3b27a7cde9077da1202ccf52325172545356964e88b2f703bec2148877.scope: Deactivated successfully.
Jan 29 09:42:48 compute-0 ceph-mon[75183]: pgmap v1009: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:48 compute-0 podman[253879]: 2026-01-29 09:42:48.281585737 +0000 UTC m=+0.046685483 container create 795cea87d4f1094097cb62d6fcc5bb0da396305c9196274f7e85fbbe31ce106a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banzai, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:42:48 compute-0 systemd[1]: Started libpod-conmon-795cea87d4f1094097cb62d6fcc5bb0da396305c9196274f7e85fbbe31ce106a.scope.
Jan 29 09:42:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:42:48 compute-0 podman[253879]: 2026-01-29 09:42:48.262169578 +0000 UTC m=+0.027269334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:42:48 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/513d1f0493ddea0daff213e027c6b1f1e607893ebb23480ab9bab78857bf5bbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/513d1f0493ddea0daff213e027c6b1f1e607893ebb23480ab9bab78857bf5bbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/513d1f0493ddea0daff213e027c6b1f1e607893ebb23480ab9bab78857bf5bbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/513d1f0493ddea0daff213e027c6b1f1e607893ebb23480ab9bab78857bf5bbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:42:48 compute-0 podman[253879]: 2026-01-29 09:42:48.412749432 +0000 UTC m=+0.177849198 container init 795cea87d4f1094097cb62d6fcc5bb0da396305c9196274f7e85fbbe31ce106a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banzai, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:42:48 compute-0 podman[253879]: 2026-01-29 09:42:48.420253866 +0000 UTC m=+0.185353602 container start 795cea87d4f1094097cb62d6fcc5bb0da396305c9196274f7e85fbbe31ce106a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banzai, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 09:42:48 compute-0 podman[253879]: 2026-01-29 09:42:48.432262253 +0000 UTC m=+0.197361989 container attach 795cea87d4f1094097cb62d6fcc5bb0da396305c9196274f7e85fbbe31ce106a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:42:48 compute-0 nova_compute[236255]: 2026-01-29 09:42:48.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:42:48 compute-0 nova_compute[236255]: 2026-01-29 09:42:48.556 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:42:48 compute-0 practical_banzai[253896]: {
Jan 29 09:42:48 compute-0 practical_banzai[253896]:     "0": [
Jan 29 09:42:48 compute-0 practical_banzai[253896]:         {
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "devices": [
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "/dev/loop3"
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             ],
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_name": "ceph_lv0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_size": "21470642176",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "name": "ceph_lv0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "tags": {
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.cluster_name": "ceph",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.crush_device_class": "",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.encrypted": "0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.objectstore": "bluestore",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.osd_id": "0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.type": "block",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.vdo": "0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.with_tpm": "0"
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             },
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "type": "block",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "vg_name": "ceph_vg0"
Jan 29 09:42:48 compute-0 practical_banzai[253896]:         }
Jan 29 09:42:48 compute-0 practical_banzai[253896]:     ],
Jan 29 09:42:48 compute-0 practical_banzai[253896]:     "1": [
Jan 29 09:42:48 compute-0 practical_banzai[253896]:         {
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "devices": [
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "/dev/loop4"
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             ],
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_name": "ceph_lv1",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_size": "21470642176",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "name": "ceph_lv1",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "tags": {
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.cluster_name": "ceph",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.crush_device_class": "",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.encrypted": "0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.objectstore": "bluestore",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.osd_id": "1",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.type": "block",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.vdo": "0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.with_tpm": "0"
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             },
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "type": "block",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "vg_name": "ceph_vg1"
Jan 29 09:42:48 compute-0 practical_banzai[253896]:         }
Jan 29 09:42:48 compute-0 practical_banzai[253896]:     ],
Jan 29 09:42:48 compute-0 practical_banzai[253896]:     "2": [
Jan 29 09:42:48 compute-0 practical_banzai[253896]:         {
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "devices": [
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "/dev/loop5"
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             ],
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_name": "ceph_lv2",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_size": "21470642176",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "name": "ceph_lv2",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "tags": {
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.cluster_name": "ceph",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.crush_device_class": "",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.encrypted": "0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.objectstore": "bluestore",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.osd_id": "2",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.type": "block",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.vdo": "0",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:                 "ceph.with_tpm": "0"
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             },
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "type": "block",
Jan 29 09:42:48 compute-0 practical_banzai[253896]:             "vg_name": "ceph_vg2"
Jan 29 09:42:48 compute-0 practical_banzai[253896]:         }
Jan 29 09:42:48 compute-0 practical_banzai[253896]:     ]
Jan 29 09:42:48 compute-0 practical_banzai[253896]: }
Jan 29 09:42:48 compute-0 systemd[1]: libpod-795cea87d4f1094097cb62d6fcc5bb0da396305c9196274f7e85fbbe31ce106a.scope: Deactivated successfully.
Jan 29 09:42:48 compute-0 conmon[253896]: conmon 795cea87d4f1094097cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-795cea87d4f1094097cb62d6fcc5bb0da396305c9196274f7e85fbbe31ce106a.scope/container/memory.events
Jan 29 09:42:48 compute-0 podman[253879]: 2026-01-29 09:42:48.762434061 +0000 UTC m=+0.527533837 container died 795cea87d4f1094097cb62d6fcc5bb0da396305c9196274f7e85fbbe31ce106a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 09:42:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1010: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-513d1f0493ddea0daff213e027c6b1f1e607893ebb23480ab9bab78857bf5bbc-merged.mount: Deactivated successfully.
Jan 29 09:42:49 compute-0 podman[253879]: 2026-01-29 09:42:49.204773285 +0000 UTC m=+0.969873061 container remove 795cea87d4f1094097cb62d6fcc5bb0da396305c9196274f7e85fbbe31ce106a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:42:49 compute-0 systemd[1]: libpod-conmon-795cea87d4f1094097cb62d6fcc5bb0da396305c9196274f7e85fbbe31ce106a.scope: Deactivated successfully.
Jan 29 09:42:49 compute-0 sudo[253800]: pam_unix(sudo:session): session closed for user root
Jan 29 09:42:49 compute-0 sudo[253918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:42:49 compute-0 sudo[253918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:42:49 compute-0 sudo[253918]: pam_unix(sudo:session): session closed for user root
Jan 29 09:42:49 compute-0 sudo[253943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:42:49 compute-0 sudo[253943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:42:49 compute-0 nova_compute[236255]: 2026-01-29 09:42:49.557 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:42:49 compute-0 podman[253980]: 2026-01-29 09:42:49.646468392 +0000 UTC m=+0.019748149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:42:49 compute-0 podman[253980]: 2026-01-29 09:42:49.755653298 +0000 UTC m=+0.128933065 container create fdadd1dcbf1de4a95c6d39e5e008e788c0dd301896d12a317b3721833a71450c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 09:42:49 compute-0 systemd[1]: Started libpod-conmon-fdadd1dcbf1de4a95c6d39e5e008e788c0dd301896d12a317b3721833a71450c.scope.
Jan 29 09:42:49 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:42:49 compute-0 podman[253980]: 2026-01-29 09:42:49.841675562 +0000 UTC m=+0.214955329 container init fdadd1dcbf1de4a95c6d39e5e008e788c0dd301896d12a317b3721833a71450c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:42:49 compute-0 podman[253980]: 2026-01-29 09:42:49.849504165 +0000 UTC m=+0.222783922 container start fdadd1dcbf1de4a95c6d39e5e008e788c0dd301896d12a317b3721833a71450c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:42:49 compute-0 fervent_carson[253996]: 167 167
Jan 29 09:42:49 compute-0 systemd[1]: libpod-fdadd1dcbf1de4a95c6d39e5e008e788c0dd301896d12a317b3721833a71450c.scope: Deactivated successfully.
Jan 29 09:42:49 compute-0 podman[253980]: 2026-01-29 09:42:49.903448336 +0000 UTC m=+0.276728083 container attach fdadd1dcbf1de4a95c6d39e5e008e788c0dd301896d12a317b3721833a71450c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carson, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:42:49 compute-0 podman[253980]: 2026-01-29 09:42:49.904236987 +0000 UTC m=+0.277516744 container died fdadd1dcbf1de4a95c6d39e5e008e788c0dd301896d12a317b3721833a71450c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6f9412d1d00350b5865a2195af7525b978babc3e59aca6ec1ad231bf70e1683-merged.mount: Deactivated successfully.
Jan 29 09:42:50 compute-0 podman[253980]: 2026-01-29 09:42:50.104365701 +0000 UTC m=+0.477645458 container remove fdadd1dcbf1de4a95c6d39e5e008e788c0dd301896d12a317b3721833a71450c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_carson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:42:50 compute-0 systemd[1]: libpod-conmon-fdadd1dcbf1de4a95c6d39e5e008e788c0dd301896d12a317b3721833a71450c.scope: Deactivated successfully.
Jan 29 09:42:50 compute-0 podman[254021]: 2026-01-29 09:42:50.227866157 +0000 UTC m=+0.048016130 container create c95ec7f2584530b533b508aa94868936cfdaf420e224c154ca3be6aa535101cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_edison, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:42:50 compute-0 systemd[1]: Started libpod-conmon-c95ec7f2584530b533b508aa94868936cfdaf420e224c154ca3be6aa535101cd.scope.
Jan 29 09:42:50 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa465a6f915c510de4323413d4cd55f0ea8e67b7c650a921106bc5ffa5a9911/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa465a6f915c510de4323413d4cd55f0ea8e67b7c650a921106bc5ffa5a9911/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa465a6f915c510de4323413d4cd55f0ea8e67b7c650a921106bc5ffa5a9911/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa465a6f915c510de4323413d4cd55f0ea8e67b7c650a921106bc5ffa5a9911/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:42:50 compute-0 podman[254021]: 2026-01-29 09:42:50.201988661 +0000 UTC m=+0.022138664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:42:50 compute-0 ceph-mon[75183]: pgmap v1010: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:50 compute-0 podman[254021]: 2026-01-29 09:42:50.326512315 +0000 UTC m=+0.146662348 container init c95ec7f2584530b533b508aa94868936cfdaf420e224c154ca3be6aa535101cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:42:50 compute-0 podman[254021]: 2026-01-29 09:42:50.332124138 +0000 UTC m=+0.152274141 container start c95ec7f2584530b533b508aa94868936cfdaf420e224c154ca3be6aa535101cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_edison, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:42:50 compute-0 podman[254021]: 2026-01-29 09:42:50.349617085 +0000 UTC m=+0.169767058 container attach c95ec7f2584530b533b508aa94868936cfdaf420e224c154ca3be6aa535101cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 09:42:50 compute-0 lvm[254117]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:42:50 compute-0 lvm[254116]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:42:50 compute-0 lvm[254117]: VG ceph_vg1 finished
Jan 29 09:42:50 compute-0 lvm[254116]: VG ceph_vg0 finished
Jan 29 09:42:50 compute-0 lvm[254119]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:42:50 compute-0 lvm[254119]: VG ceph_vg2 finished
Jan 29 09:42:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1011: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:51 compute-0 busy_edison[254038]: {}
Jan 29 09:42:51 compute-0 podman[254021]: 2026-01-29 09:42:51.097188548 +0000 UTC m=+0.917338531 container died c95ec7f2584530b533b508aa94868936cfdaf420e224c154ca3be6aa535101cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_edison, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:42:51 compute-0 systemd[1]: libpod-c95ec7f2584530b533b508aa94868936cfdaf420e224c154ca3be6aa535101cd.scope: Deactivated successfully.
Jan 29 09:42:51 compute-0 systemd[1]: libpod-c95ec7f2584530b533b508aa94868936cfdaf420e224c154ca3be6aa535101cd.scope: Consumed 1.094s CPU time.
Jan 29 09:42:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6aa465a6f915c510de4323413d4cd55f0ea8e67b7c650a921106bc5ffa5a9911-merged.mount: Deactivated successfully.
Jan 29 09:42:51 compute-0 podman[254021]: 2026-01-29 09:42:51.180340674 +0000 UTC m=+1.000490657 container remove c95ec7f2584530b533b508aa94868936cfdaf420e224c154ca3be6aa535101cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_edison, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:42:51 compute-0 systemd[1]: libpod-conmon-c95ec7f2584530b533b508aa94868936cfdaf420e224c154ca3be6aa535101cd.scope: Deactivated successfully.
Jan 29 09:42:51 compute-0 sudo[253943]: pam_unix(sudo:session): session closed for user root
Jan 29 09:42:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:42:51 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:42:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:42:51 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:42:51 compute-0 sudo[254136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:42:51 compute-0 sudo[254136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:42:51 compute-0 sudo[254136]: pam_unix(sudo:session): session closed for user root
Jan 29 09:42:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:42:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/532309355' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:42:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:42:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/532309355' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:42:51 compute-0 nova_compute[236255]: 2026-01-29 09:42:51.550 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:42:52 compute-0 ceph-mon[75183]: pgmap v1011: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:52 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:42:52 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:42:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/532309355' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:42:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/532309355' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:42:52 compute-0 nova_compute[236255]: 2026-01-29 09:42:52.550 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:42:52 compute-0 nova_compute[236255]: 2026-01-29 09:42:52.572 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:42:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1012: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:42:53 compute-0 nova_compute[236255]: 2026-01-29 09:42:53.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:42:54 compute-0 ceph-mon[75183]: pgmap v1012: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:54 compute-0 nova_compute[236255]: 2026-01-29 09:42:54.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:42:54 compute-0 nova_compute[236255]: 2026-01-29 09:42:54.589 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:42:54 compute-0 nova_compute[236255]: 2026-01-29 09:42:54.590 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:42:54 compute-0 nova_compute[236255]: 2026-01-29 09:42:54.590 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:42:54 compute-0 nova_compute[236255]: 2026-01-29 09:42:54.590 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:42:54 compute-0 nova_compute[236255]: 2026-01-29 09:42:54.590 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:42:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1013: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:55 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:42:55 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3016792000' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:42:55 compute-0 nova_compute[236255]: 2026-01-29 09:42:55.180 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:42:55 compute-0 nova_compute[236255]: 2026-01-29 09:42:55.303 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:42:55 compute-0 nova_compute[236255]: 2026-01-29 09:42:55.304 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5102MB free_disk=59.98826471157372GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:42:55 compute-0 nova_compute[236255]: 2026-01-29 09:42:55.304 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:42:55 compute-0 nova_compute[236255]: 2026-01-29 09:42:55.304 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:42:55 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3016792000' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:42:55 compute-0 nova_compute[236255]: 2026-01-29 09:42:55.383 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:42:55 compute-0 nova_compute[236255]: 2026-01-29 09:42:55.383 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:42:55 compute-0 nova_compute[236255]: 2026-01-29 09:42:55.403 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:42:56
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', 'backups', 'volumes', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images']
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:42:56 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:42:56 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1616116147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:42:56 compute-0 nova_compute[236255]: 2026-01-29 09:42:56.071 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:42:56 compute-0 nova_compute[236255]: 2026-01-29 09:42:56.076 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:42:56 compute-0 nova_compute[236255]: 2026-01-29 09:42:56.091 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:42:56 compute-0 nova_compute[236255]: 2026-01-29 09:42:56.092 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:42:56 compute-0 nova_compute[236255]: 2026-01-29 09:42:56.093 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:42:56 compute-0 ceph-mon[75183]: pgmap v1013: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:56 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1616116147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:42:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1014: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:57 compute-0 nova_compute[236255]: 2026-01-29 09:42:57.093 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:42:57 compute-0 nova_compute[236255]: 2026-01-29 09:42:57.094 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:42:57 compute-0 nova_compute[236255]: 2026-01-29 09:42:57.094 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:42:57 compute-0 nova_compute[236255]: 2026-01-29 09:42:57.110 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:42:57 compute-0 nova_compute[236255]: 2026-01-29 09:42:57.110 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:42:57 compute-0 nova_compute[236255]: 2026-01-29 09:42:57.112 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:42:58 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:42:58 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.9 total, 600.0 interval
                                           Cumulative writes: 6065 writes, 24K keys, 6065 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6065 writes, 1167 syncs, 5.20 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1663 writes, 4565 keys, 1663 commit groups, 1.0 writes per commit group, ingest: 2.52 MB, 0.00 MB/s
                                           Interval WAL: 1663 writes, 727 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 09:42:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:42:58 compute-0 ceph-mon[75183]: pgmap v1014: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:42:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1015: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:00 compute-0 ceph-mon[75183]: pgmap v1015: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1016: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.756942845403104e-07 of space, bias 1.0, pg target 8.270828536209312e-05 quantized to 32 (current 32)
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.5000327611377854e-07 of space, bias 1.0, pg target 4.500098283413356e-05 quantized to 32 (current 32)
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683123502180493 of space, bias 1.0, pg target 0.2004937050654148 quantized to 32 (current 32)
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2051960589356773e-06 of space, bias 4.0, pg target 0.0014462352707228128 quantized to 16 (current 32)
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:43:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:43:02 compute-0 ceph-mon[75183]: pgmap v1016: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1017: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:03 compute-0 podman[254205]: 2026-01-29 09:43:03.164541741 +0000 UTC m=+0.106683568 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 29 09:43:03 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:43:04 compute-0 ceph-mon[75183]: pgmap v1017: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:04 compute-0 podman[254232]: 2026-01-29 09:43:04.856217942 +0000 UTC m=+0.044936235 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 29 09:43:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1018: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:06 compute-0 ceph-mon[75183]: pgmap v1018: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1019: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:07 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:43:07 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.6 total, 600.0 interval
                                           Cumulative writes: 5841 writes, 24K keys, 5841 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5841 writes, 1098 syncs, 5.32 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1592 writes, 4293 keys, 1592 commit groups, 1.0 writes per commit group, ingest: 2.20 MB, 0.00 MB/s
                                           Interval WAL: 1592 writes, 710 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 09:43:08 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:43:08 compute-0 ceph-mon[75183]: pgmap v1019: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1020: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:43:09.049 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:43:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:43:09.050 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:43:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:43:09.050 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:43:10 compute-0 ceph-mon[75183]: pgmap v1020: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1021: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:12 compute-0 ceph-mon[75183]: pgmap v1021: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1022: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:13 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:43:13 compute-0 ceph-mon[75183]: pgmap v1022: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1023: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:16 compute-0 ceph-mon[75183]: pgmap v1023: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:16 compute-0 ceph-mgr[75473]: [devicehealth INFO root] Check health
Jan 29 09:43:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1024: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:18 compute-0 ceph-mon[75183]: pgmap v1024: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:18 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:43:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1025: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:20 compute-0 ceph-mon[75183]: pgmap v1025: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1026: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:22 compute-0 ceph-mon[75183]: pgmap v1026: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1027: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:23 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:43:24 compute-0 ceph-mon[75183]: pgmap v1027: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1028: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:26 compute-0 ceph-mon[75183]: pgmap v1028: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:43:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:43:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:43:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:43:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:43:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:43:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1029: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:28 compute-0 ceph-mon[75183]: pgmap v1029: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:28 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:43:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1030: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1031: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1032: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:43:34 compute-0 podman[254252]: 2026-01-29 09:43:34.123818624 +0000 UTC m=+0.067559931 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 29 09:43:34 compute-0 ceph-mon[75183]: pgmap v1030: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1033: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:35 compute-0 podman[254278]: 2026-01-29 09:43:35.126604816 +0000 UTC m=+0.064987981 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 29 09:43:35 compute-0 ceph-mon[75183]: pgmap v1031: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:35 compute-0 ceph-mon[75183]: pgmap v1032: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:36 compute-0 ceph-mon[75183]: pgmap v1033: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1034: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:38 compute-0 ceph-mon[75183]: pgmap v1034: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1035: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:43:40 compute-0 ceph-mon[75183]: pgmap v1035: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1036: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:42 compute-0 ceph-mon[75183]: pgmap v1036: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1037: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:43:44 compute-0 ceph-mon[75183]: pgmap v1037: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1038: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:46 compute-0 ceph-mon[75183]: pgmap v1038: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1039: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:48 compute-0 nova_compute[236255]: 2026-01-29 09:43:48.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:43:48 compute-0 ceph-mon[75183]: pgmap v1039: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1040: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:43:49 compute-0 nova_compute[236255]: 2026-01-29 09:43:49.566 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:43:49 compute-0 nova_compute[236255]: 2026-01-29 09:43:49.566 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:43:49 compute-0 ceph-mon[75183]: pgmap v1040: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:50 compute-0 nova_compute[236255]: 2026-01-29 09:43:50.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:43:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1041: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:51 compute-0 sudo[254297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:43:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:43:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1169761659' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:43:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:43:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1169761659' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:43:51 compute-0 sudo[254297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:43:51 compute-0 sudo[254297]: pam_unix(sudo:session): session closed for user root
Jan 29 09:43:51 compute-0 sudo[254322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 29 09:43:51 compute-0 sudo[254322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:43:51 compute-0 nova_compute[236255]: 2026-01-29 09:43:51.550 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:43:51 compute-0 sudo[254322]: pam_unix(sudo:session): session closed for user root
Jan 29 09:43:51 compute-0 sudo[254378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:43:51 compute-0 sudo[254378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:43:51 compute-0 sudo[254378]: pam_unix(sudo:session): session closed for user root
Jan 29 09:43:51 compute-0 sudo[254403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- inventory --format=json-pretty --filter-for-batch
Jan 29 09:43:51 compute-0 sudo[254403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:43:52 compute-0 ceph-mon[75183]: pgmap v1041: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/1169761659' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:43:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/1169761659' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:43:52 compute-0 podman[254440]: 2026-01-29 09:43:52.152053448 +0000 UTC m=+0.032294161 container create fc6dd7fad877a27e3597ddf4cfbaa530019012e0080f1c9e4dad563524651c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 09:43:52 compute-0 systemd[1]: Started libpod-conmon-fc6dd7fad877a27e3597ddf4cfbaa530019012e0080f1c9e4dad563524651c06.scope.
Jan 29 09:43:52 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:43:52 compute-0 podman[254440]: 2026-01-29 09:43:52.222813705 +0000 UTC m=+0.103054438 container init fc6dd7fad877a27e3597ddf4cfbaa530019012e0080f1c9e4dad563524651c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ride, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:43:52 compute-0 podman[254440]: 2026-01-29 09:43:52.227559084 +0000 UTC m=+0.107799797 container start fc6dd7fad877a27e3597ddf4cfbaa530019012e0080f1c9e4dad563524651c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:43:52 compute-0 podman[254440]: 2026-01-29 09:43:52.230995078 +0000 UTC m=+0.111235991 container attach fc6dd7fad877a27e3597ddf4cfbaa530019012e0080f1c9e4dad563524651c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 09:43:52 compute-0 quirky_ride[254455]: 167 167
Jan 29 09:43:52 compute-0 systemd[1]: libpod-fc6dd7fad877a27e3597ddf4cfbaa530019012e0080f1c9e4dad563524651c06.scope: Deactivated successfully.
Jan 29 09:43:52 compute-0 podman[254440]: 2026-01-29 09:43:52.136321029 +0000 UTC m=+0.016561762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:43:52 compute-0 conmon[254455]: conmon fc6dd7fad877a27e3597 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc6dd7fad877a27e3597ddf4cfbaa530019012e0080f1c9e4dad563524651c06.scope/container/memory.events
Jan 29 09:43:52 compute-0 podman[254440]: 2026-01-29 09:43:52.234813212 +0000 UTC m=+0.115053925 container died fc6dd7fad877a27e3597ddf4cfbaa530019012e0080f1c9e4dad563524651c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ride, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 09:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0a4ed9ed35535b7f163efd79ebfc7eb658c833761467de6dc02fbd07c193694-merged.mount: Deactivated successfully.
Jan 29 09:43:52 compute-0 podman[254440]: 2026-01-29 09:43:52.274442101 +0000 UTC m=+0.154682814 container remove fc6dd7fad877a27e3597ddf4cfbaa530019012e0080f1c9e4dad563524651c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ride, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 09:43:52 compute-0 systemd[1]: libpod-conmon-fc6dd7fad877a27e3597ddf4cfbaa530019012e0080f1c9e4dad563524651c06.scope: Deactivated successfully.
Jan 29 09:43:52 compute-0 podman[254479]: 2026-01-29 09:43:52.392708152 +0000 UTC m=+0.035966880 container create e8878f52ab6dc0a31e90e04eb2e2a4124e78ce3698e4648f052a9a7767027ba2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_colden, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 09:43:52 compute-0 systemd[1]: Started libpod-conmon-e8878f52ab6dc0a31e90e04eb2e2a4124e78ce3698e4648f052a9a7767027ba2.scope.
Jan 29 09:43:52 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37301b7c5ba1144842788ba03b25bfd70c2bfe44febaf2113c6ca652a1bdd66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37301b7c5ba1144842788ba03b25bfd70c2bfe44febaf2113c6ca652a1bdd66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37301b7c5ba1144842788ba03b25bfd70c2bfe44febaf2113c6ca652a1bdd66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37301b7c5ba1144842788ba03b25bfd70c2bfe44febaf2113c6ca652a1bdd66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:52 compute-0 podman[254479]: 2026-01-29 09:43:52.473259016 +0000 UTC m=+0.116517794 container init e8878f52ab6dc0a31e90e04eb2e2a4124e78ce3698e4648f052a9a7767027ba2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_colden, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:43:52 compute-0 podman[254479]: 2026-01-29 09:43:52.378528206 +0000 UTC m=+0.021786954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:43:52 compute-0 podman[254479]: 2026-01-29 09:43:52.477772639 +0000 UTC m=+0.121031367 container start e8878f52ab6dc0a31e90e04eb2e2a4124e78ce3698e4648f052a9a7767027ba2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_colden, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 09:43:52 compute-0 podman[254479]: 2026-01-29 09:43:52.482402185 +0000 UTC m=+0.125660933 container attach e8878f52ab6dc0a31e90e04eb2e2a4124e78ce3698e4648f052a9a7767027ba2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_colden, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 09:43:52 compute-0 nova_compute[236255]: 2026-01-29 09:43:52.555 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:43:52 compute-0 nova_compute[236255]: 2026-01-29 09:43:52.558 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:43:52 compute-0 nova_compute[236255]: 2026-01-29 09:43:52.559 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 29 09:43:52 compute-0 nova_compute[236255]: 2026-01-29 09:43:52.586 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 29 09:43:52 compute-0 sweet_colden[254495]: [
Jan 29 09:43:52 compute-0 sweet_colden[254495]:     {
Jan 29 09:43:52 compute-0 sweet_colden[254495]:         "available": false,
Jan 29 09:43:52 compute-0 sweet_colden[254495]:         "being_replaced": false,
Jan 29 09:43:52 compute-0 sweet_colden[254495]:         "ceph_device_lvm": false,
Jan 29 09:43:52 compute-0 sweet_colden[254495]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:         "lsm_data": {},
Jan 29 09:43:52 compute-0 sweet_colden[254495]:         "lvs": [],
Jan 29 09:43:52 compute-0 sweet_colden[254495]:         "path": "/dev/sr0",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:         "rejected_reasons": [
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "Insufficient space (<5GB)",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "Has a FileSystem"
Jan 29 09:43:52 compute-0 sweet_colden[254495]:         ],
Jan 29 09:43:52 compute-0 sweet_colden[254495]:         "sys_api": {
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "actuators": null,
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "device_nodes": [
Jan 29 09:43:52 compute-0 sweet_colden[254495]:                 "sr0"
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             ],
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "devname": "sr0",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "human_readable_size": "482.00 KB",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "id_bus": "ata",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "model": "QEMU DVD-ROM",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "nr_requests": "2",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "parent": "/dev/sr0",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "partitions": {},
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "path": "/dev/sr0",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "removable": "1",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "rev": "2.5+",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "ro": "0",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "rotational": "1",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "sas_address": "",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "sas_device_handle": "",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "scheduler_mode": "mq-deadline",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "sectors": 0,
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "sectorsize": "2048",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "size": 493568.0,
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "support_discard": "2048",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "type": "disk",
Jan 29 09:43:52 compute-0 sweet_colden[254495]:             "vendor": "QEMU"
Jan 29 09:43:52 compute-0 sweet_colden[254495]:         }
Jan 29 09:43:52 compute-0 sweet_colden[254495]:     }
Jan 29 09:43:52 compute-0 sweet_colden[254495]: ]
Jan 29 09:43:52 compute-0 systemd[1]: libpod-e8878f52ab6dc0a31e90e04eb2e2a4124e78ce3698e4648f052a9a7767027ba2.scope: Deactivated successfully.
Jan 29 09:43:52 compute-0 podman[254479]: 2026-01-29 09:43:52.949037884 +0000 UTC m=+0.592296652 container died e8878f52ab6dc0a31e90e04eb2e2a4124e78ce3698e4648f052a9a7767027ba2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 09:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f37301b7c5ba1144842788ba03b25bfd70c2bfe44febaf2113c6ca652a1bdd66-merged.mount: Deactivated successfully.
Jan 29 09:43:52 compute-0 podman[254479]: 2026-01-29 09:43:52.991517691 +0000 UTC m=+0.634776419 container remove e8878f52ab6dc0a31e90e04eb2e2a4124e78ce3698e4648f052a9a7767027ba2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_colden, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 09:43:53 compute-0 systemd[1]: libpod-conmon-e8878f52ab6dc0a31e90e04eb2e2a4124e78ce3698e4648f052a9a7767027ba2.scope: Deactivated successfully.
Jan 29 09:43:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1042: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:53 compute-0 sudo[254403]: pam_unix(sudo:session): session closed for user root
Jan 29 09:43:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:43:53 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:43:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:43:53 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:43:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:43:53 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:43:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 09:43:53 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:43:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 09:43:53 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:43:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 09:43:53 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:43:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 09:43:53 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:43:53 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:43:53 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:43:53 compute-0 sudo[255231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:43:53 compute-0 sudo[255231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:43:53 compute-0 sudo[255231]: pam_unix(sudo:session): session closed for user root
Jan 29 09:43:53 compute-0 sudo[255256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 29 09:43:53 compute-0 sudo[255256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:43:53 compute-0 podman[255293]: 2026-01-29 09:43:53.518460182 +0000 UTC m=+0.060411806 container create d71b5f6fc78ee03700bab7726b88def852f05bf0d3d633ce2631e79031897ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 09:43:53 compute-0 systemd[1]: Started libpod-conmon-d71b5f6fc78ee03700bab7726b88def852f05bf0d3d633ce2631e79031897ce7.scope.
Jan 29 09:43:53 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:43:53 compute-0 podman[255293]: 2026-01-29 09:43:53.492127745 +0000 UTC m=+0.034079429 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:43:53 compute-0 nova_compute[236255]: 2026-01-29 09:43:53.586 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:43:53 compute-0 podman[255293]: 2026-01-29 09:43:53.598252335 +0000 UTC m=+0.140203969 container init d71b5f6fc78ee03700bab7726b88def852f05bf0d3d633ce2631e79031897ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:43:53 compute-0 podman[255293]: 2026-01-29 09:43:53.606554992 +0000 UTC m=+0.148506626 container start d71b5f6fc78ee03700bab7726b88def852f05bf0d3d633ce2631e79031897ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:43:53 compute-0 podman[255293]: 2026-01-29 09:43:53.610581841 +0000 UTC m=+0.152533495 container attach d71b5f6fc78ee03700bab7726b88def852f05bf0d3d633ce2631e79031897ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 09:43:53 compute-0 youthful_elion[255309]: 167 167
Jan 29 09:43:53 compute-0 systemd[1]: libpod-d71b5f6fc78ee03700bab7726b88def852f05bf0d3d633ce2631e79031897ce7.scope: Deactivated successfully.
Jan 29 09:43:53 compute-0 podman[255293]: 2026-01-29 09:43:53.612226536 +0000 UTC m=+0.154178160 container died d71b5f6fc78ee03700bab7726b88def852f05bf0d3d633ce2631e79031897ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-426e7ceb64779dd06d1638c0ca76adb46fa2cb608dc43399d3fbbcaaf3fe0451-merged.mount: Deactivated successfully.
Jan 29 09:43:53 compute-0 podman[255293]: 2026-01-29 09:43:53.664187511 +0000 UTC m=+0.206139145 container remove d71b5f6fc78ee03700bab7726b88def852f05bf0d3d633ce2631e79031897ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 09:43:53 compute-0 systemd[1]: libpod-conmon-d71b5f6fc78ee03700bab7726b88def852f05bf0d3d633ce2631e79031897ce7.scope: Deactivated successfully.
Jan 29 09:43:53 compute-0 podman[255332]: 2026-01-29 09:43:53.854571957 +0000 UTC m=+0.061508877 container create 1e38e38fa8863c2a1d6ea3e5be75156e1fccad47557fbb2d2f303b0817689646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:43:53 compute-0 systemd[1]: Started libpod-conmon-1e38e38fa8863c2a1d6ea3e5be75156e1fccad47557fbb2d2f303b0817689646.scope.
Jan 29 09:43:53 compute-0 podman[255332]: 2026-01-29 09:43:53.829458573 +0000 UTC m=+0.036395553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:43:53 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3416113639570f426a0a6cf6eeadcefefdd6967172fc05062f3510f0bc6283/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3416113639570f426a0a6cf6eeadcefefdd6967172fc05062f3510f0bc6283/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3416113639570f426a0a6cf6eeadcefefdd6967172fc05062f3510f0bc6283/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3416113639570f426a0a6cf6eeadcefefdd6967172fc05062f3510f0bc6283/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3416113639570f426a0a6cf6eeadcefefdd6967172fc05062f3510f0bc6283/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:53 compute-0 podman[255332]: 2026-01-29 09:43:53.955236798 +0000 UTC m=+0.162173738 container init 1e38e38fa8863c2a1d6ea3e5be75156e1fccad47557fbb2d2f303b0817689646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:43:53 compute-0 podman[255332]: 2026-01-29 09:43:53.967533273 +0000 UTC m=+0.174470193 container start 1e38e38fa8863c2a1d6ea3e5be75156e1fccad47557fbb2d2f303b0817689646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:43:53 compute-0 podman[255332]: 2026-01-29 09:43:53.971549433 +0000 UTC m=+0.178486363 container attach 1e38e38fa8863c2a1d6ea3e5be75156e1fccad47557fbb2d2f303b0817689646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 09:43:54 compute-0 ceph-mon[75183]: pgmap v1042: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:54 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:43:54 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:43:54 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:43:54 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 09:43:54 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:43:54 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 09:43:54 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 09:43:54 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:43:54 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:43:54 compute-0 frosty_satoshi[255348]: --> passed data devices: 0 physical, 3 LVM
Jan 29 09:43:54 compute-0 frosty_satoshi[255348]: --> All data devices are unavailable
Jan 29 09:43:54 compute-0 systemd[1]: libpod-1e38e38fa8863c2a1d6ea3e5be75156e1fccad47557fbb2d2f303b0817689646.scope: Deactivated successfully.
Jan 29 09:43:54 compute-0 podman[255332]: 2026-01-29 09:43:54.456670425 +0000 UTC m=+0.663607305 container died 1e38e38fa8863c2a1d6ea3e5be75156e1fccad47557fbb2d2f303b0817689646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a3416113639570f426a0a6cf6eeadcefefdd6967172fc05062f3510f0bc6283-merged.mount: Deactivated successfully.
Jan 29 09:43:54 compute-0 podman[255332]: 2026-01-29 09:43:54.513558045 +0000 UTC m=+0.720494935 container remove 1e38e38fa8863c2a1d6ea3e5be75156e1fccad47557fbb2d2f303b0817689646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:43:54 compute-0 systemd[1]: libpod-conmon-1e38e38fa8863c2a1d6ea3e5be75156e1fccad47557fbb2d2f303b0817689646.scope: Deactivated successfully.
Jan 29 09:43:54 compute-0 sudo[255256]: pam_unix(sudo:session): session closed for user root
Jan 29 09:43:54 compute-0 sudo[255382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:43:54 compute-0 sudo[255382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:43:54 compute-0 sudo[255382]: pam_unix(sudo:session): session closed for user root
Jan 29 09:43:54 compute-0 sudo[255407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- lvm list --format json
Jan 29 09:43:54 compute-0 sudo[255407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:43:54 compute-0 podman[255445]: 2026-01-29 09:43:54.928093185 +0000 UTC m=+0.043096585 container create d90319b9dd5cf0d470fd6dd9a3bbbdca688055cea25a29def01b55797ccbb7d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 09:43:54 compute-0 systemd[1]: Started libpod-conmon-d90319b9dd5cf0d470fd6dd9a3bbbdca688055cea25a29def01b55797ccbb7d5.scope.
Jan 29 09:43:54 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:43:55 compute-0 podman[255445]: 2026-01-29 09:43:54.910333651 +0000 UTC m=+0.025337071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:43:55 compute-0 podman[255445]: 2026-01-29 09:43:55.010910681 +0000 UTC m=+0.125914101 container init d90319b9dd5cf0d470fd6dd9a3bbbdca688055cea25a29def01b55797ccbb7d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wing, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:43:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1043: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:55 compute-0 podman[255445]: 2026-01-29 09:43:55.019945657 +0000 UTC m=+0.134949037 container start d90319b9dd5cf0d470fd6dd9a3bbbdca688055cea25a29def01b55797ccbb7d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wing, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:43:55 compute-0 musing_wing[255461]: 167 167
Jan 29 09:43:55 compute-0 systemd[1]: libpod-d90319b9dd5cf0d470fd6dd9a3bbbdca688055cea25a29def01b55797ccbb7d5.scope: Deactivated successfully.
Jan 29 09:43:55 compute-0 podman[255445]: 2026-01-29 09:43:55.024725747 +0000 UTC m=+0.139729147 container attach d90319b9dd5cf0d470fd6dd9a3bbbdca688055cea25a29def01b55797ccbb7d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 09:43:55 compute-0 podman[255445]: 2026-01-29 09:43:55.02519937 +0000 UTC m=+0.140202800 container died d90319b9dd5cf0d470fd6dd9a3bbbdca688055cea25a29def01b55797ccbb7d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wing, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:43:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-55d0b00a51276d817a5a9dc99b414f791509d61cd53b872f71bd1098759d2ccc-merged.mount: Deactivated successfully.
Jan 29 09:43:55 compute-0 podman[255445]: 2026-01-29 09:43:55.070576156 +0000 UTC m=+0.185579556 container remove d90319b9dd5cf0d470fd6dd9a3bbbdca688055cea25a29def01b55797ccbb7d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:43:55 compute-0 systemd[1]: libpod-conmon-d90319b9dd5cf0d470fd6dd9a3bbbdca688055cea25a29def01b55797ccbb7d5.scope: Deactivated successfully.
Jan 29 09:43:55 compute-0 podman[255486]: 2026-01-29 09:43:55.260217141 +0000 UTC m=+0.061896057 container create 7e26d7d92297e235b7e35ca193255f03ca92259fa11263d306d34fc7e741158a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:43:55 compute-0 systemd[1]: Started libpod-conmon-7e26d7d92297e235b7e35ca193255f03ca92259fa11263d306d34fc7e741158a.scope.
Jan 29 09:43:55 compute-0 podman[255486]: 2026-01-29 09:43:55.235610341 +0000 UTC m=+0.037289277 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:43:55 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64378078a4b0484276a6df94925419317587fb0db797741ad18b768e80a85e2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64378078a4b0484276a6df94925419317587fb0db797741ad18b768e80a85e2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64378078a4b0484276a6df94925419317587fb0db797741ad18b768e80a85e2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64378078a4b0484276a6df94925419317587fb0db797741ad18b768e80a85e2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:55 compute-0 podman[255486]: 2026-01-29 09:43:55.366714551 +0000 UTC m=+0.168393487 container init 7e26d7d92297e235b7e35ca193255f03ca92259fa11263d306d34fc7e741158a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kepler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:43:55 compute-0 podman[255486]: 2026-01-29 09:43:55.373534127 +0000 UTC m=+0.175213003 container start 7e26d7d92297e235b7e35ca193255f03ca92259fa11263d306d34fc7e741158a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kepler, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:43:55 compute-0 podman[255486]: 2026-01-29 09:43:55.377489925 +0000 UTC m=+0.179168901 container attach 7e26d7d92297e235b7e35ca193255f03ca92259fa11263d306d34fc7e741158a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 09:43:55 compute-0 exciting_kepler[255501]: {
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:     "0": [
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:         {
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "devices": [
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "/dev/loop3"
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             ],
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_name": "ceph_lv0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_size": "21470642176",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f2ccc677-8576-4fc5-9e3f-60956ceb21b0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "name": "ceph_lv0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "tags": {
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.block_uuid": "GmQffg-jK4i-oPD9-d4im-0z1o-0cDX-fqE4CL",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.cluster_name": "ceph",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.crush_device_class": "",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.encrypted": "0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.objectstore": "bluestore",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.osd_fsid": "f2ccc677-8576-4fc5-9e3f-60956ceb21b0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.osd_id": "0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.type": "block",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.vdo": "0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.with_tpm": "0"
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             },
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "type": "block",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "vg_name": "ceph_vg0"
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:         }
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:     ],
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:     "1": [
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:         {
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "devices": [
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "/dev/loop4"
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             ],
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_name": "ceph_lv1",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_size": "21470642176",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=173bb34b-fc2f-4ae0-a6ce-59e1b64aaace,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "name": "ceph_lv1",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "tags": {
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.block_uuid": "YodIgh-0HEI-SI1w-CwZ0-hqoS-IgtX-RDB0yF",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.cluster_name": "ceph",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.crush_device_class": "",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.encrypted": "0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.objectstore": "bluestore",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.osd_fsid": "173bb34b-fc2f-4ae0-a6ce-59e1b64aaace",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.osd_id": "1",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.type": "block",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.vdo": "0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.with_tpm": "0"
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             },
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "type": "block",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "vg_name": "ceph_vg1"
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:         }
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:     ],
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:     "2": [
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:         {
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "devices": [
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "/dev/loop5"
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             ],
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_name": "ceph_lv2",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_size": "21470642176",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3fdce3ca-565d-5459-88e8-1ffe58b48437,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9a078cd8-4bd4-40a2-98a3-c1163db42997,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "lv_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "name": "ceph_lv2",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "tags": {
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.block_uuid": "tchNBz-D0L2-wtdA-20AP-hlBQ-CeTP-7ohi2G",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.cephx_lockbox_secret": "",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.cluster_fsid": "3fdce3ca-565d-5459-88e8-1ffe58b48437",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.cluster_name": "ceph",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.crush_device_class": "",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.encrypted": "0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.objectstore": "bluestore",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.osd_fsid": "9a078cd8-4bd4-40a2-98a3-c1163db42997",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.osd_id": "2",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.type": "block",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.vdo": "0",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:                 "ceph.with_tpm": "0"
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             },
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "type": "block",
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:             "vg_name": "ceph_vg2"
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:         }
Jan 29 09:43:55 compute-0 exciting_kepler[255501]:     ]
Jan 29 09:43:55 compute-0 exciting_kepler[255501]: }
Jan 29 09:43:55 compute-0 systemd[1]: libpod-7e26d7d92297e235b7e35ca193255f03ca92259fa11263d306d34fc7e741158a.scope: Deactivated successfully.
Jan 29 09:43:55 compute-0 podman[255486]: 2026-01-29 09:43:55.694110108 +0000 UTC m=+0.495789034 container died 7e26d7d92297e235b7e35ca193255f03ca92259fa11263d306d34fc7e741158a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kepler, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 09:43:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-64378078a4b0484276a6df94925419317587fb0db797741ad18b768e80a85e2e-merged.mount: Deactivated successfully.
Jan 29 09:43:55 compute-0 podman[255486]: 2026-01-29 09:43:55.745969471 +0000 UTC m=+0.547648397 container remove 7e26d7d92297e235b7e35ca193255f03ca92259fa11263d306d34fc7e741158a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kepler, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 09:43:55 compute-0 systemd[1]: libpod-conmon-7e26d7d92297e235b7e35ca193255f03ca92259fa11263d306d34fc7e741158a.scope: Deactivated successfully.
Jan 29 09:43:55 compute-0 sudo[255407]: pam_unix(sudo:session): session closed for user root
Jan 29 09:43:55 compute-0 sudo[255522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 29 09:43:55 compute-0 sudo[255522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:43:55 compute-0 sudo[255522]: pam_unix(sudo:session): session closed for user root
Jan 29 09:43:55 compute-0 sudo[255547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/3fdce3ca-565d-5459-88e8-1ffe58b48437/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437 -- raw list --format json
Jan 29 09:43:55 compute-0 sudo[255547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2026-01-29_09:43:56
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'volumes', 'vms', 'images', 'backups', 'cephfs.cephfs.data']
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 09:43:56 compute-0 ceph-mon[75183]: pgmap v1043: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:56 compute-0 podman[255583]: 2026-01-29 09:43:56.224752751 +0000 UTC m=+0.044202685 container create e852a0f579fdc304f6b7a96b4a14d9cf34bcdb228f2018ea916308df47dcbf49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:43:56 compute-0 systemd[1]: Started libpod-conmon-e852a0f579fdc304f6b7a96b4a14d9cf34bcdb228f2018ea916308df47dcbf49.scope.
Jan 29 09:43:56 compute-0 podman[255583]: 2026-01-29 09:43:56.204964592 +0000 UTC m=+0.024414496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:43:56 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:43:56 compute-0 podman[255583]: 2026-01-29 09:43:56.319330497 +0000 UTC m=+0.138780471 container init e852a0f579fdc304f6b7a96b4a14d9cf34bcdb228f2018ea916308df47dcbf49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tharp, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 09:43:56 compute-0 podman[255583]: 2026-01-29 09:43:56.327600522 +0000 UTC m=+0.147050446 container start e852a0f579fdc304f6b7a96b4a14d9cf34bcdb228f2018ea916308df47dcbf49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tharp, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 09:43:56 compute-0 cool_tharp[255600]: 167 167
Jan 29 09:43:56 compute-0 systemd[1]: libpod-e852a0f579fdc304f6b7a96b4a14d9cf34bcdb228f2018ea916308df47dcbf49.scope: Deactivated successfully.
Jan 29 09:43:56 compute-0 podman[255583]: 2026-01-29 09:43:56.332353512 +0000 UTC m=+0.151803426 container attach e852a0f579fdc304f6b7a96b4a14d9cf34bcdb228f2018ea916308df47dcbf49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tharp, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 09:43:56 compute-0 podman[255583]: 2026-01-29 09:43:56.332904057 +0000 UTC m=+0.152353951 container died e852a0f579fdc304f6b7a96b4a14d9cf34bcdb228f2018ea916308df47dcbf49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tharp, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 09:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e970b60783623863f8e8046307dbdfff2a8086f968a799322b4dfa58e9aad3d-merged.mount: Deactivated successfully.
Jan 29 09:43:56 compute-0 podman[255583]: 2026-01-29 09:43:56.374421317 +0000 UTC m=+0.193871251 container remove e852a0f579fdc304f6b7a96b4a14d9cf34bcdb228f2018ea916308df47dcbf49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tharp, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 09:43:56 compute-0 systemd[1]: libpod-conmon-e852a0f579fdc304f6b7a96b4a14d9cf34bcdb228f2018ea916308df47dcbf49.scope: Deactivated successfully.
Jan 29 09:43:56 compute-0 podman[255625]: 2026-01-29 09:43:56.548326304 +0000 UTC m=+0.054000672 container create 7884ca18d460eea014fab848e129a32a6ad6d75bd99368e5c491b16c40d84833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 09:43:56 compute-0 nova_compute[236255]: 2026-01-29 09:43:56.554 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:43:56 compute-0 nova_compute[236255]: 2026-01-29 09:43:56.556 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 09:43:56 compute-0 nova_compute[236255]: 2026-01-29 09:43:56.556 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 09:43:56 compute-0 nova_compute[236255]: 2026-01-29 09:43:56.572 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 09:43:56 compute-0 nova_compute[236255]: 2026-01-29 09:43:56.573 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:43:56 compute-0 nova_compute[236255]: 2026-01-29 09:43:56.574 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:43:56 compute-0 nova_compute[236255]: 2026-01-29 09:43:56.596 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:43:56 compute-0 nova_compute[236255]: 2026-01-29 09:43:56.597 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:43:56 compute-0 nova_compute[236255]: 2026-01-29 09:43:56.597 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:43:56 compute-0 nova_compute[236255]: 2026-01-29 09:43:56.598 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 29 09:43:56 compute-0 nova_compute[236255]: 2026-01-29 09:43:56.598 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:43:56 compute-0 systemd[1]: Started libpod-conmon-7884ca18d460eea014fab848e129a32a6ad6d75bd99368e5c491b16c40d84833.scope.
Jan 29 09:43:56 compute-0 podman[255625]: 2026-01-29 09:43:56.526089948 +0000 UTC m=+0.031764316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 09:43:56 compute-0 systemd[1]: Started libcrun container.
Jan 29 09:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2170936af35a3801f62c2ed43d3548b653893e7b46fd8b0f97677274b86bc4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2170936af35a3801f62c2ed43d3548b653893e7b46fd8b0f97677274b86bc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2170936af35a3801f62c2ed43d3548b653893e7b46fd8b0f97677274b86bc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2170936af35a3801f62c2ed43d3548b653893e7b46fd8b0f97677274b86bc4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 09:43:56 compute-0 podman[255625]: 2026-01-29 09:43:56.653032846 +0000 UTC m=+0.158707204 container init 7884ca18d460eea014fab848e129a32a6ad6d75bd99368e5c491b16c40d84833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_williamson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 09:43:56 compute-0 podman[255625]: 2026-01-29 09:43:56.662926805 +0000 UTC m=+0.168601123 container start 7884ca18d460eea014fab848e129a32a6ad6d75bd99368e5c491b16c40d84833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 09:43:56 compute-0 podman[255625]: 2026-01-29 09:43:56.667660474 +0000 UTC m=+0.173334832 container attach 7884ca18d460eea014fab848e129a32a6ad6d75bd99368e5c491b16c40d84833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_williamson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:43:56 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 09:43:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1044: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:43:57 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2779998589' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:43:57 compute-0 nova_compute[236255]: 2026-01-29 09:43:57.166 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:43:57 compute-0 nova_compute[236255]: 2026-01-29 09:43:57.314 236262 WARNING nova.virt.libvirt.driver [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 09:43:57 compute-0 nova_compute[236255]: 2026-01-29 09:43:57.315 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5014MB free_disk=59.98826471157372GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 29 09:43:57 compute-0 nova_compute[236255]: 2026-01-29 09:43:57.316 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:43:57 compute-0 nova_compute[236255]: 2026-01-29 09:43:57.316 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:43:57 compute-0 lvm[255741]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:43:57 compute-0 lvm[255741]: VG ceph_vg0 finished
Jan 29 09:43:57 compute-0 lvm[255742]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:43:57 compute-0 lvm[255742]: VG ceph_vg1 finished
Jan 29 09:43:57 compute-0 lvm[255744]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:43:57 compute-0 lvm[255744]: VG ceph_vg2 finished
Jan 29 09:43:57 compute-0 pensive_williamson[255641]: {}
Jan 29 09:43:57 compute-0 systemd[1]: libpod-7884ca18d460eea014fab848e129a32a6ad6d75bd99368e5c491b16c40d84833.scope: Deactivated successfully.
Jan 29 09:43:57 compute-0 systemd[1]: libpod-7884ca18d460eea014fab848e129a32a6ad6d75bd99368e5c491b16c40d84833.scope: Consumed 1.191s CPU time.
Jan 29 09:43:57 compute-0 podman[255625]: 2026-01-29 09:43:57.519918495 +0000 UTC m=+1.025592823 container died 7884ca18d460eea014fab848e129a32a6ad6d75bd99368e5c491b16c40d84833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_williamson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 09:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e2170936af35a3801f62c2ed43d3548b653893e7b46fd8b0f97677274b86bc4-merged.mount: Deactivated successfully.
Jan 29 09:43:57 compute-0 nova_compute[236255]: 2026-01-29 09:43:57.558 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 09:43:57 compute-0 nova_compute[236255]: 2026-01-29 09:43:57.558 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 09:43:57 compute-0 podman[255625]: 2026-01-29 09:43:57.566281018 +0000 UTC m=+1.071955346 container remove 7884ca18d460eea014fab848e129a32a6ad6d75bd99368e5c491b16c40d84833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_williamson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 09:43:57 compute-0 systemd[1]: libpod-conmon-7884ca18d460eea014fab848e129a32a6ad6d75bd99368e5c491b16c40d84833.scope: Deactivated successfully.
Jan 29 09:43:57 compute-0 sudo[255547]: pam_unix(sudo:session): session closed for user root
Jan 29 09:43:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 09:43:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:43:57 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 09:43:57 compute-0 ceph-mon[75183]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:43:57 compute-0 nova_compute[236255]: 2026-01-29 09:43:57.645 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Refreshing inventories for resource provider 2689825d-8fa0-473a-adf1-5005faba9bec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 29 09:43:57 compute-0 sudo[255758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 29 09:43:57 compute-0 sudo[255758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 29 09:43:57 compute-0 sudo[255758]: pam_unix(sudo:session): session closed for user root
Jan 29 09:43:57 compute-0 nova_compute[236255]: 2026-01-29 09:43:57.726 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Updating ProviderTree inventory for provider 2689825d-8fa0-473a-adf1-5005faba9bec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 29 09:43:57 compute-0 nova_compute[236255]: 2026-01-29 09:43:57.726 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Updating inventory in ProviderTree for provider 2689825d-8fa0-473a-adf1-5005faba9bec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 29 09:43:57 compute-0 nova_compute[236255]: 2026-01-29 09:43:57.740 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Refreshing aggregate associations for resource provider 2689825d-8fa0-473a-adf1-5005faba9bec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 29 09:43:57 compute-0 nova_compute[236255]: 2026-01-29 09:43:57.773 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Refreshing trait associations for resource provider 2689825d-8fa0-473a-adf1-5005faba9bec, traits: HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 29 09:43:57 compute-0 nova_compute[236255]: 2026-01-29 09:43:57.788 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 09:43:58 compute-0 ceph-mon[75183]: pgmap v1044: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:58 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2779998589' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:43:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:43:58 compute-0 ceph-mon[75183]: from='mgr.14122 192.168.122.100:0/3842934292' entity='mgr.compute-0.ucpkkb' 
Jan 29 09:43:58 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 09:43:58 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1802739595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:43:58 compute-0 nova_compute[236255]: 2026-01-29 09:43:58.296 236262 DEBUG oslo_concurrency.processutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 09:43:58 compute-0 nova_compute[236255]: 2026-01-29 09:43:58.306 236262 DEBUG nova.compute.provider_tree [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed in ProviderTree for provider: 2689825d-8fa0-473a-adf1-5005faba9bec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 09:43:58 compute-0 nova_compute[236255]: 2026-01-29 09:43:58.321 236262 DEBUG nova.scheduler.client.report [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Inventory has not changed for provider 2689825d-8fa0-473a-adf1-5005faba9bec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 09:43:58 compute-0 nova_compute[236255]: 2026-01-29 09:43:58.323 236262 DEBUG nova.compute.resource_tracker [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 09:43:58 compute-0 nova_compute[236255]: 2026-01-29 09:43:58.324 236262 DEBUG oslo_concurrency.lockutils [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:43:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1045: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:43:59 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1802739595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 09:43:59 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:43:59 compute-0 nova_compute[236255]: 2026-01-29 09:43:59.306 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:44:00 compute-0 ceph-mon[75183]: pgmap v1045: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:00 compute-0 nova_compute[236255]: 2026-01-29 09:44:00.556 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:44:00 compute-0 nova_compute[236255]: 2026-01-29 09:44:00.557 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1046: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.756942845403104e-07 of space, bias 1.0, pg target 8.270828536209312e-05 quantized to 32 (current 32)
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.5000327611377854e-07 of space, bias 1.0, pg target 4.500098283413356e-05 quantized to 32 (current 32)
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006683123502180493 of space, bias 1.0, pg target 0.2004937050654148 quantized to 32 (current 32)
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2051960589356773e-06 of space, bias 4.0, pg target 0.0014462352707228128 quantized to 16 (current 32)
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 09:44:01 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 09:44:02 compute-0 ceph-mon[75183]: pgmap v1046: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1047: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:04 compute-0 ceph-mon[75183]: pgmap v1047: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:04 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:44:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1048: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:05 compute-0 podman[255805]: 2026-01-29 09:44:05.178186695 +0000 UTC m=+0.114895290 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 29 09:44:05 compute-0 podman[255831]: 2026-01-29 09:44:05.237172191 +0000 UTC m=+0.052721006 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 29 09:44:06 compute-0 ceph-mon[75183]: pgmap v1048: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1049: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:08 compute-0 ceph-mon[75183]: pgmap v1049: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1050: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:44:09.050 152476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 09:44:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:44:09.051 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 09:44:09 compute-0 ovn_metadata_agent[152471]: 2026-01-29 09:44:09.051 152476 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 09:44:09 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:44:10 compute-0 ceph-mon[75183]: pgmap v1050: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1051: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:12 compute-0 ceph-mon[75183]: pgmap v1051: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1052: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:14 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:44:14 compute-0 ceph-mon[75183]: pgmap v1052: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1053: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:16 compute-0 ceph-mon[75183]: pgmap v1053: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1054: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:18 compute-0 ceph-mon[75183]: pgmap v1054: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1055: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:19 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:44:20 compute-0 ceph-mon[75183]: pgmap v1055: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1056: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:22 compute-0 ceph-mon[75183]: pgmap v1056: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1057: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:24 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:44:24 compute-0 ceph-mon[75183]: pgmap v1057: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:24 compute-0 sshd-session[255853]: Accepted publickey for zuul from 192.168.122.10 port 44346 ssh2: ECDSA SHA256:qqjbOYpZTlxu87GVWkN9FUN5axzsqhdCHa4izMGxX18
Jan 29 09:44:24 compute-0 systemd-logind[799]: New session 54 of user zuul.
Jan 29 09:44:24 compute-0 systemd[1]: Started Session 54 of User zuul.
Jan 29 09:44:24 compute-0 sshd-session[255853]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 29 09:44:24 compute-0 sudo[255857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 29 09:44:24 compute-0 sudo[255857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 29 09:44:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1058: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:26 compute-0 ceph-mon[75183]: pgmap v1058: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15010 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:44:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:44:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:44:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:44:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 09:44:26 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 09:44:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1059: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15012 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:27 compute-0 ceph-mon[75183]: from='client.15010 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:27 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 29 09:44:27 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3835393088' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 29 09:44:28 compute-0 ceph-mon[75183]: pgmap v1059: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:28 compute-0 ceph-mon[75183]: from='client.15012 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:28 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3835393088' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 29 09:44:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1060: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:29 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:44:30 compute-0 ovs-vsctl[256118]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 29 09:44:30 compute-0 ceph-mon[75183]: pgmap v1060: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:30 compute-0 virtqemud[236585]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 29 09:44:30 compute-0 virtqemud[236585]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 29 09:44:30 compute-0 virtqemud[236585]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 29 09:44:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1061: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:31 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: cache status {prefix=cache status} (starting...)
Jan 29 09:44:31 compute-0 lvm[256439]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 09:44:31 compute-0 lvm[256439]: VG ceph_vg1 finished
Jan 29 09:44:31 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: client ls {prefix=client ls} (starting...)
Jan 29 09:44:31 compute-0 lvm[256467]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 09:44:31 compute-0 lvm[256467]: VG ceph_vg0 finished
Jan 29 09:44:31 compute-0 lvm[256473]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 09:44:31 compute-0 lvm[256473]: VG ceph_vg2 finished
Jan 29 09:44:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15016 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:31 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: damage ls {prefix=damage ls} (starting...)
Jan 29 09:44:32 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: dump loads {prefix=dump loads} (starting...)
Jan 29 09:44:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15018 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:32 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 29 09:44:32 compute-0 ceph-mon[75183]: pgmap v1061: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:32 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 29 09:44:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 29 09:44:32 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2151488951' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 29 09:44:32 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 29 09:44:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15022 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:32 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 29 09:44:32 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 29 09:44:32 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 09:44:32 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3287259299' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:44:32 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 29 09:44:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15026 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:33 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 29 09:44:33 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]: 2026-01-29T09:44:33.013+0000 7f5f5ebc1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 29 09:44:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1062: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:33 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: ops {prefix=ops} (starting...)
Jan 29 09:44:33 compute-0 ceph-mon[75183]: from='client.15016 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:33 compute-0 ceph-mon[75183]: from='client.15018 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:33 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2151488951' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 29 09:44:33 compute-0 ceph-mon[75183]: from='client.15022 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:33 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3287259299' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 09:44:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 29 09:44:33 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331154436' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 29 09:44:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 29 09:44:33 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241379774' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 29 09:44:33 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: session ls {prefix=session ls} (starting...)
Jan 29 09:44:33 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 29 09:44:33 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4010207975' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 29 09:44:33 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.eawrqy asok_command: status {prefix=status} (starting...)
Jan 29 09:44:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 29 09:44:34 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2017822406' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 29 09:44:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:44:34 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15036 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:34 compute-0 ceph-mon[75183]: from='client.15026 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:34 compute-0 ceph-mon[75183]: pgmap v1062: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:34 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1331154436' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 29 09:44:34 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3241379774' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 29 09:44:34 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4010207975' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 29 09:44:34 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2017822406' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 29 09:44:34 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 29 09:44:34 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120168968' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 29 09:44:34 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15040 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1063: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:35 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 29 09:44:35 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307788048' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 29 09:44:35 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 29 09:44:35 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807406431' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 29 09:44:35 compute-0 ceph-mon[75183]: from='client.15036 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:35 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/120168968' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 29 09:44:35 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/307788048' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 29 09:44:35 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/807406431' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 29 09:44:35 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 29 09:44:35 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601698260' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 29 09:44:35 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 29 09:44:35 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820685677' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 29 09:44:36 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 29 09:44:36 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3068745541' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 29 09:44:36 compute-0 podman[257040]: 2026-01-29 09:44:36.145151707 +0000 UTC m=+0.083526626 container health_status a7a2c5b288989014be92dea036133cacbea4360703c2b8b642f24d0125b15cf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 29 09:44:36 compute-0 podman[257039]: 2026-01-29 09:44:36.145797655 +0000 UTC m=+0.084703158 container health_status 7fcf78322797dee92752e9228301405af830fa315ae702698417ad706b871cbc (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1f3e524608ae431f2d3770ec925d27ae4e30878f0bc7e8d2ca0dfa6a0fc848da-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2-b6e9e5500c18f440b45bdba689e9b7fc329253bc2e03351b9fb5d0063b08c7f2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 29 09:44:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15052 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:36 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]: 2026-01-29T09:44:36.285+0000 7f5f5ebc1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 29 09:44:36 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 29 09:44:36 compute-0 ceph-mon[75183]: from='client.15040 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:36 compute-0 ceph-mon[75183]: pgmap v1063: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:36 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2601698260' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 29 09:44:36 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3820685677' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 29 09:44:36 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3068745541' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 29 09:44:36 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 29 09:44:36 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359693719' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 29 09:44:36 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 29 09:44:36 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/294022747' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 29 09:44:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15058 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.057335 7 0.000055
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.056489 7 0.000066
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.059647 7 0.000058
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.057129 7 0.000058
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.055936 7 0.000138
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000259 1 0.000011
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.056342 7 0.000067
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.055590 7 0.000037
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.056765 7 0.000076
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.056750 7 0.000056
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000391 1 0.000032
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000430 1 0.000013
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000486 1 0.000115
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000586 1 0.000088
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.017030 1 0.000039
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000571 1 0.000016
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.017267 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.059441 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000615 1 0.000040
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000647 1 0.000014
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000658 1 0.000016
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000828 1 0.000068
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000920 1 0.000013
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000966 1 0.000012
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001053 1 0.000021
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001092 1 0.000015
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001129 1 0.000065
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001208 1 0.000055
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001225 1 0.000030
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001277 1 0.000018
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001324 1 0.000047
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001428 1 0.000288
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000858 1 0.000777
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.020250 1 0.000041
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.020513 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.062572 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.14( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.026636 1 0.000071
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.14( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.026953 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.14( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.069191 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.15( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.033578 1 0.000027
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.15( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.034033 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.15( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.076197 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:37.624083+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 5 sent 3 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:06.682923+0000 osd.2 (osd.2) 4 : cluster [DBG] 5.1c scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:06.693392+0000 osd.2 (osd.2) 5 : cluster [DBG] 5.1c scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030762 1 0.000056
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.030857 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.084062 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.037905 1 0.000060
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038245 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.088787 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.3( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045110 1 0.000056
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.3( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.045482 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.3( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.096392 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.5( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.052517 1 0.000026
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.5( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.052922 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.5( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.103803 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.2( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060241 1 0.000039
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.2( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.060676 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.2( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.111522 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067308 1 0.000056
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.067726 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.120527 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.b( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.074515 1 0.000054
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.b( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.075112 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.b( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.125468 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.082034 1 0.000114
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.082684 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.132352 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088988 1 0.000057
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.089647 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.140043 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.4( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.096793 1 0.000058
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.4( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.097497 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.4( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.148826 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.103745 1 0.000041
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.104495 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.154885 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.111220 1 0.000030
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112048 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.165696 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1e( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.118627 1 0.000051
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1e( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.119488 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1e( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.173340 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.126623 1 0.000045
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.127629 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.179171 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.19( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.133089 1 0.000039
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.19( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.134046 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.19( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.187813 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.7( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.140308 1 0.000022
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.7( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.141452 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.7( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.193082 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.143395 1 0.000020
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.143536 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.203013 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.12( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.150548 1 0.000026
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.12( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.150744 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.12( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.209601 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.11( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.157751 1 0.000032
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.11( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.158181 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.11( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.217152 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.165143 1 0.000030
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.165608 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.221832 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.9( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.172559 1 0.000036
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.9( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.173197 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.9( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.231585 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.d( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.179718 1 0.000019
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.d( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.180328 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.d( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.238643 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.16( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.187371 1 0.000029
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.16( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.188043 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.16( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.246683 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.194811 1 0.000019
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.195491 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.252204 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.201724 1 0.000028
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.202422 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.259341 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.13( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.209482 1 0.000027
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.13( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.210005 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.13( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.268849 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.17( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.216558 1 0.000053
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.17( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.217447 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.17( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.276674 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1a( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.223598 1 0.000038
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1a( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.224552 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1a( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.280668 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.232278 1 0.000875
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.232573 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.291497 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 827392 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 357455 data_alloc: 218103808 data_used: 252
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.238257 1 0.000053
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.239262 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.f( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.295781 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.245763 1 0.000035
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.246862 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.304227 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.253086 1 0.000052
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.254224 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.19( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.310185 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.260371 1 0.000045
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.261550 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.318737 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.268040 1 0.000053
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.269299 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1d( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.328972 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.275526 1 0.000038
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.276805 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.c( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.333182 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.282513 1 0.000028
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.283827 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.340622 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.289842 1 0.000046
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.291212 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.1( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.347992 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.297008 1 0.000031
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.298472 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[5.18( empty lb MIN local-lis/les=41/42 n=0 ec=41/25 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.354082 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.304479 1 0.000018
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.305400 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 pg_epoch: 46 pg[2.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.362578 0 0.000000
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:38.624324+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 4 last_log 7 sent 5 num 4 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:07.912765+0000 osd.2 (osd.2) 6 : cluster [DBG] 5.1f scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:07.923254+0000 osd.2 (osd.2) 7 : cluster [DBG] 5.1f scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 5)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:06.682923+0000 osd.2 (osd.2) 4 : cluster [DBG] 5.1c scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:06.693392+0000 osd.2 (osd.2) 5 : cluster [DBG] 5.1c scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 835584 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:39.624534+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 4 last_log 9 sent 7 num 4 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:08.751582+0000 osd.2 (osd.2) 8 : cluster [DBG] 5.10 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:08.762260+0000 osd.2 (osd.2) 9 : cluster [DBG] 5.10 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 7)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:07.912765+0000 osd.2 (osd.2) 6 : cluster [DBG] 5.1f scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:07.923254+0000 osd.2 (osd.2) 7 : cluster [DBG] 5.1f scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 9)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:08.751582+0000 osd.2 (osd.2) 8 : cluster [DBG] 5.10 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:08.762260+0000 osd.2 (osd.2) 9 : cluster [DBG] 5.10 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe14d000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 737280 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:40.624757+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:41.624943+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:10.769268+0000 osd.2 (osd.2) 10 : cluster [DBG] 2.14 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:10.779802+0000 osd.2 (osd.2) 11 : cluster [DBG] 2.14 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 860160 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 11)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:10.769268+0000 osd.2 (osd.2) 10 : cluster [DBG] 2.14 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:10.779802+0000 osd.2 (osd.2) 11 : cluster [DBG] 2.14 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:42.625421+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 860160 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 350815 data_alloc: 218103808 data_used: 252
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:43.625668+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:12.786691+0000 osd.2 (osd.2) 12 : cluster [DBG] 2.12 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:12.797191+0000 osd.2 (osd.2) 13 : cluster [DBG] 2.12 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 827392 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 13)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:12.786691+0000 osd.2 (osd.2) 12 : cluster [DBG] 2.12 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:12.797191+0000 osd.2 (osd.2) 13 : cluster [DBG] 2.12 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:44.625883+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:45.626055+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:46.626471+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:47.626684+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 350815 data_alloc: 218103808 data_used: 252
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:48.626879+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.759116173s of 11.233061790s, submitted: 219
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62087168 unmapped: 819200 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:49.627028+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:18.791570+0000 osd.2 (osd.2) 14 : cluster [DBG] 2.10 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:18.802105+0000 osd.2 (osd.2) 15 : cluster [DBG] 2.10 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62087168 unmapped: 819200 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 15)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:18.791570+0000 osd.2 (osd.2) 14 : cluster [DBG] 2.10 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:18.802105+0000 osd.2 (osd.2) 15 : cluster [DBG] 2.10 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:50.627420+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:19.780294+0000 osd.2 (osd.2) 16 : cluster [DBG] 5.17 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:19.790771+0000 osd.2 (osd.2) 17 : cluster [DBG] 5.17 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:51.627799+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 17)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:19.780294+0000 osd.2 (osd.2) 16 : cluster [DBG] 5.17 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:19.790771+0000 osd.2 (osd.2) 17 : cluster [DBG] 5.17 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:52.629983+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 355641 data_alloc: 218103808 data_used: 252
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:53.630377+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:54.630513+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:55.631210+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:24.679091+0000 osd.2 (osd.2) 18 : cluster [DBG] 5.8 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:24.689759+0000 osd.2 (osd.2) 19 : cluster [DBG] 5.8 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 19)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:24.679091+0000 osd.2 (osd.2) 18 : cluster [DBG] 5.8 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:24.689759+0000 osd.2 (osd.2) 19 : cluster [DBG] 5.8 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:56.631695+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62119936 unmapped: 786432 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:57.632253+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:26.761995+0000 osd.2 (osd.2) 20 : cluster [DBG] 2.e scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:26.772683+0000 osd.2 (osd.2) 21 : cluster [DBG] 2.e scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62152704 unmapped: 753664 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 362876 data_alloc: 218103808 data_used: 252
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 21)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:26.761995+0000 osd.2 (osd.2) 20 : cluster [DBG] 2.e scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:26.772683+0000 osd.2 (osd.2) 21 : cluster [DBG] 2.e scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:58.633060+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:27.786806+0000 osd.2 (osd.2) 22 : cluster [DBG] 2.1a scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:27.797511+0000 osd.2 (osd.2) 23 : cluster [DBG] 2.1a scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 737280 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 23)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:27.786806+0000 osd.2 (osd.2) 22 : cluster [DBG] 2.1a scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:27.797511+0000 osd.2 (osd.2) 23 : cluster [DBG] 2.1a scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:59.633597+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.918220520s of 11.026761055s, submitted: 10
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 737280 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:00.634258+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:29.818470+0000 osd.2 (osd.2) 24 : cluster [DBG] 2.c scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:29.829105+0000 osd.2 (osd.2) 25 : cluster [DBG] 2.c scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 737280 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 25)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:29.818470+0000 osd.2 (osd.2) 24 : cluster [DBG] 2.c scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:29.829105+0000 osd.2 (osd.2) 25 : cluster [DBG] 2.c scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:01.635087+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 720896 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:02.635701+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:31.766591+0000 osd.2 (osd.2) 26 : cluster [DBG] 5.a scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:31.777217+0000 osd.2 (osd.2) 27 : cluster [DBG] 5.a scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 704512 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 370109 data_alloc: 218103808 data_used: 252
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 27)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:31.766591+0000 osd.2 (osd.2) 26 : cluster [DBG] 5.a scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:31.777217+0000 osd.2 (osd.2) 27 : cluster [DBG] 5.a scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:03.636232+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:32.742665+0000 osd.2 (osd.2) 28 : cluster [DBG] 5.b scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:32.753181+0000 osd.2 (osd.2) 29 : cluster [DBG] 5.b scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 688128 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 29)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:32.742665+0000 osd.2 (osd.2) 28 : cluster [DBG] 5.b scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:32.753181+0000 osd.2 (osd.2) 29 : cluster [DBG] 5.b scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:04.636781+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 688128 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:05.637044+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:34.733322+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.0 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:34.743994+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.0 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 31)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:34.733322+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.0 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:34.743994+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.0 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 679936 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:06.637479+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 679936 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:07.637620+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 679936 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 372520 data_alloc: 218103808 data_used: 252
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:08.637941+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 679936 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:09.638226+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:38.801236+0000 osd.2 (osd.2) 32 : cluster [DBG] 2.1 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:38.811787+0000 osd.2 (osd.2) 33 : cluster [DBG] 2.1 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 33)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:38.801236+0000 osd.2 (osd.2) 32 : cluster [DBG] 2.1 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:38.811787+0000 osd.2 (osd.2) 33 : cluster [DBG] 2.1 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 671744 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:10.638591+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.792082787s of 10.928949356s, submitted: 10
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 663552 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:11.638967+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:40.747450+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.6 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:40.758058+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.6 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 35)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:40.747450+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.6 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:40.758058+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.6 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 663552 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:12.639401+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 663552 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 379753 data_alloc: 218103808 data_used: 252
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:13.639688+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:42.760250+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.e scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:42.770912+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.e scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 37)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:42.760250+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.e scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:42.770912+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.e scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:14.639868+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 606208 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:15.640047+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:44.779067+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.d scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:44.789617+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.d scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 606208 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 39)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:44.779067+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.d scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:44.789617+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.d scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:16.640381+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 598016 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:17.640676+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 598016 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 382164 data_alloc: 218103808 data_used: 252
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:18.640926+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:19.641255+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:20.641614+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.782402992s of 10.014264107s, submitted: 6
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:21.641872+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:50.761568+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.1b scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:50.771694+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.1b scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 41)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:50.761568+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.1b scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:50.771694+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.1b scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 581632 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:22.642230+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 581632 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 386990 data_alloc: 218103808 data_used: 252
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:23.642381+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:52.744315+0000 osd.2 (osd.2) 42 : cluster [DBG] 2.1e scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:52.754832+0000 osd.2 (osd.2) 43 : cluster [DBG] 2.1e scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 540672 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 43)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:52.744315+0000 osd.2 (osd.2) 42 : cluster [DBG] 2.1e scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:52.754832+0000 osd.2 (osd.2) 43 : cluster [DBG] 2.1e scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:24.642685+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:53.727713+0000 osd.2 (osd.2) 44 : cluster [DBG] 2.0 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:53.738254+0000 osd.2 (osd.2) 45 : cluster [DBG] 2.0 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 532480 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 45)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:53.727713+0000 osd.2 (osd.2) 44 : cluster [DBG] 2.0 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:53.738254+0000 osd.2 (osd.2) 45 : cluster [DBG] 2.0 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:25.642986+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 516096 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:26.643243+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 548864 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:27.643455+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:56.694657+0000 osd.2 (osd.2) 46 : cluster [DBG] 6.f scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:56.708589+0000 osd.2 (osd.2) 47 : cluster [DBG] 6.f scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 548864 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 394225 data_alloc: 218103808 data_used: 252
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 47)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:56.694657+0000 osd.2 (osd.2) 46 : cluster [DBG] 6.f scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:56.708589+0000 osd.2 (osd.2) 47 : cluster [DBG] 6.f scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:28.643726+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:57.652986+0000 osd.2 (osd.2) 48 : cluster [DBG] 4.1a scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:14:57.663600+0000 osd.2 (osd.2) 49 : cluster [DBG] 4.1a scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 49)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:57.652986+0000 osd.2 (osd.2) 48 : cluster [DBG] 4.1a scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:14:57.663600+0000 osd.2 (osd.2) 49 : cluster [DBG] 4.1a scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:29.643984+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:30.644205+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:31.644481+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 614400 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:32.644677+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 614400 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 394225 data_alloc: 218103808 data_used: 252
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:33.644961+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.788371086s of 12.929638863s, submitted: 10
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 1654784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:34.645242+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:03.691462+0000 osd.2 (osd.2) 50 : cluster [DBG] 4.18 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:03.702058+0000 osd.2 (osd.2) 51 : cluster [DBG] 4.18 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 1646592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 51)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:03.691462+0000 osd.2 (osd.2) 50 : cluster [DBG] 4.18 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:03.702058+0000 osd.2 (osd.2) 51 : cluster [DBG] 4.18 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:35.645503+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:04.698786+0000 osd.2 (osd.2) 52 : cluster [DBG] 4.e scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:04.709334+0000 osd.2 (osd.2) 53 : cluster [DBG] 4.e scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 1638400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 53)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:04.698786+0000 osd.2 (osd.2) 52 : cluster [DBG] 4.e scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:04.709334+0000 osd.2 (osd.2) 53 : cluster [DBG] 4.e scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:36.645765+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 1638400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:37.645952+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:36 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 1638400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 399049 data_alloc: 218103808 data_used: 252
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:38.646100+0000)
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:08.634201+0000 osd.2 (osd.2) 54 : cluster [DBG] 4.1 scrub starts
Jan 29 09:44:36 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:08.644922+0000 osd.2 (osd.2) 55 : cluster [DBG] 4.1 scrub ok
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:36 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 1630208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:36 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:39.646305+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 55)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:08.634201+0000 osd.2 (osd.2) 54 : cluster [DBG] 4.1 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:08.644922+0000 osd.2 (osd.2) 55 : cluster [DBG] 4.1 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 1630208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:40.646485+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 1630208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:41.646674+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 1622016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:42.646833+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 1622016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 401460 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:43.646960+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 1613824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:44.647169+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 1613824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:45.647363+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 1613824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:46.647594+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 1605632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:47.647734+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 1605632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 401460 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:48.648071+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 1597440 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:49.648260+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 1597440 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.788869858s of 16.955900192s, submitted: 6
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:50.648392+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 1 last_log 56 sent 55 num 1 unsent 1 sending 1
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:20.647273+0000 osd.2 (osd.2) 56 : cluster [DBG] 4.a scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 56)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:20.647273+0000 osd.2 (osd.2) 56 : cluster [DBG] 4.a scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 1589248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:51.648677+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 1 last_log 57 sent 56 num 1 unsent 1 sending 1
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:20.657986+0000 osd.2 (osd.2) 57 : cluster [DBG] 4.a scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 57)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:20.657986+0000 osd.2 (osd.2) 57 : cluster [DBG] 4.a scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 1581056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:52.649736+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 1581056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 403871 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:53.649988+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 1564672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:54.650218+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 1564672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:55.650398+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62398464 unmapped: 1556480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:56.650613+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62398464 unmapped: 1556480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:57.650759+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:27.594927+0000 osd.2 (osd.2) 58 : cluster [DBG] 6.8 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:27.605763+0000 osd.2 (osd.2) 59 : cluster [DBG] 6.8 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 59)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:27.594927+0000 osd.2 (osd.2) 58 : cluster [DBG] 6.8 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:27.605763+0000 osd.2 (osd.2) 59 : cluster [DBG] 6.8 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 1548288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 406282 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:58.651059+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 1540096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:59.651239+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 1540096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:00.651401+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:29.662755+0000 osd.2 (osd.2) 60 : cluster [DBG] 6.15 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:29.680505+0000 osd.2 (osd.2) 61 : cluster [DBG] 6.15 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 61)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:29.662755+0000 osd.2 (osd.2) 60 : cluster [DBG] 6.15 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:29.680505+0000 osd.2 (osd.2) 61 : cluster [DBG] 6.15 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 1540096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.951041222s of 10.965127945s, submitted: 6
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:01.651632+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:31.611850+0000 osd.2 (osd.2) 62 : cluster [DBG] 6.14 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:31.636611+0000 osd.2 (osd.2) 63 : cluster [DBG] 6.14 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 1515520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 63)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:31.611850+0000 osd.2 (osd.2) 62 : cluster [DBG] 6.14 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:31.636611+0000 osd.2 (osd.2) 63 : cluster [DBG] 6.14 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:02.651976+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:32.582089+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.13 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:32.592671+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.13 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 1515520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 413521 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 65)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:32.582089+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.13 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:32.592671+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.13 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:03.652275+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:33.558918+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.11 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:33.569375+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.11 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 1490944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:04.652497+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 67)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:33.558918+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.11 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:33.569375+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.11 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 1490944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:05.652721+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:35.525904+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.1c scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:35.536484+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.1c scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 69)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:35.525904+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.1c scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:35.536484+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.1c scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1474560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:06.652987+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1474560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:07.653264+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1474560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418347 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:08.653591+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 1466368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:09.653909+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 1466368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:10.654091+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:40.554658+0000 osd.2 (osd.2) 70 : cluster [DBG] 6.11 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:40.565208+0000 osd.2 (osd.2) 71 : cluster [DBG] 6.11 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 1449984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 71)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:40.554658+0000 osd.2 (osd.2) 70 : cluster [DBG] 6.11 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:40.565208+0000 osd.2 (osd.2) 71 : cluster [DBG] 6.11 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:11.654527+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:41.566788+0000 osd.2 (osd.2) 72 : cluster [DBG] 6.13 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:41.584440+0000 osd.2 (osd.2) 73 : cluster [DBG] 6.13 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.617183685s of 10.058506966s, submitted: 11
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62529536 unmapped: 1425408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 73)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:41.566788+0000 osd.2 (osd.2) 72 : cluster [DBG] 6.13 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:41.584440+0000 osd.2 (osd.2) 73 : cluster [DBG] 6.13 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:12.654832+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:42.528591+0000 osd.2 (osd.2) 74 : cluster [DBG] 6.1f scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:42.546260+0000 osd.2 (osd.2) 75 : cluster [DBG] 6.1f scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62529536 unmapped: 1425408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 425586 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 75)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:42.528591+0000 osd.2 (osd.2) 74 : cluster [DBG] 6.1f scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:42.546260+0000 osd.2 (osd.2) 75 : cluster [DBG] 6.1f scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:13.655071+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 1417216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:14.655307+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 1417216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:15.655494+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1409024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:16.655715+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1409024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:17.655996+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:47.576186+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.18 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:47.586756+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.18 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 77)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:47.576186+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.18 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:47.586756+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.18 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1409024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 427999 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:18.656259+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 1400832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:19.656447+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:49.549653+0000 osd.2 (osd.2) 78 : cluster [DBG] 3.16 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:49.560215+0000 osd.2 (osd.2) 79 : cluster [DBG] 3.16 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 79)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:49.549653+0000 osd.2 (osd.2) 78 : cluster [DBG] 3.16 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:49.560215+0000 osd.2 (osd.2) 79 : cluster [DBG] 3.16 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 1392640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:20.656714+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:50.504847+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.11 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:50.515417+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.11 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 81)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:50.504847+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.11 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:50.515417+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.11 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 1368064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:21.656953+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 1368064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:22.657189+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 1368064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432825 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:23.657381+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 1335296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.624273300s of 12.843159676s, submitted: 9
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:24.657623+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:54.514117+0000 osd.2 (osd.2) 82 : cluster [DBG] 4.1b scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:54.524653+0000 osd.2 (osd.2) 83 : cluster [DBG] 4.1b scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 83)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:54.514117+0000 osd.2 (osd.2) 82 : cluster [DBG] 4.1b scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:54.524653+0000 osd.2 (osd.2) 83 : cluster [DBG] 4.1b scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 1327104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:25.657899+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:55.480797+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.15 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:55.491341+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.15 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 85)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:55.480797+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.15 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:55.491341+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.15 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1318912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:26.658152+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:56.521936+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.11 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:15:56.532509+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.11 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 87)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:56.521936+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.11 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:15:56.532509+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.11 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1318912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:27.658544+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1318912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 440064 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:28.658946+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 1310720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:29.659650+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 1310720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:30.660202+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62652416 unmapped: 1302528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:31.660859+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62652416 unmapped: 1302528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:32.661291+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:02.479396+0000 osd.2 (osd.2) 88 : cluster [DBG] 7.a scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:02.490070+0000 osd.2 (osd.2) 89 : cluster [DBG] 7.a scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 89)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:02.479396+0000 osd.2 (osd.2) 88 : cluster [DBG] 7.a scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:02.490070+0000 osd.2 (osd.2) 89 : cluster [DBG] 7.a scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 442475 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1286144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:33.661907+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1286144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:34.662254+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1286144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:35.662406+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1277952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:36.662567+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1277952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.825295448s of 13.027392387s, submitted: 8
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:37.662796+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:07.541667+0000 osd.2 (osd.2) 90 : cluster [DBG] 7.1c scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:07.552248+0000 osd.2 (osd.2) 91 : cluster [DBG] 7.1c scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 91)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:07.541667+0000 osd.2 (osd.2) 90 : cluster [DBG] 7.1c scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:07.552248+0000 osd.2 (osd.2) 91 : cluster [DBG] 7.1c scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 444888 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 1269760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:38.663115+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 1269760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:39.663482+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:09.511755+0000 osd.2 (osd.2) 92 : cluster [DBG] 3.e scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:09.522381+0000 osd.2 (osd.2) 93 : cluster [DBG] 3.e scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 93)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:09.511755+0000 osd.2 (osd.2) 92 : cluster [DBG] 3.e scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:09.522381+0000 osd.2 (osd.2) 93 : cluster [DBG] 3.e scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 1253376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:40.663762+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 1253376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:41.664050+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 1245184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:42.664550+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:12.509997+0000 osd.2 (osd.2) 94 : cluster [DBG] 7.8 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:12.520583+0000 osd.2 (osd.2) 95 : cluster [DBG] 7.8 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 95)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:12.509997+0000 osd.2 (osd.2) 94 : cluster [DBG] 7.8 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:12.520583+0000 osd.2 (osd.2) 95 : cluster [DBG] 7.8 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 449710 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 1228800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:43.664799+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 1204224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:44.664967+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 1204224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:45.665219+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1171456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:46.665375+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:16.425566+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.2 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:16.436181+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.2 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 97)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:16.425566+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.2 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:16.436181+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.2 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1171456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:47.665674+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:17.386841+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.1 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:17.397435+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.1 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 99)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:17.386841+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.1 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:17.397435+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.1 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454532 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1163264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:48.665854+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1163264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:49.666036+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1163264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:50.666190+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.886688232s of 13.916419983s, submitted: 10
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1163264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:51.666356+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:21.457997+0000 osd.2 (osd.2) 100 : cluster [DBG] 3.7 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:21.468603+0000 osd.2 (osd.2) 101 : cluster [DBG] 3.7 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 101)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:21.457997+0000 osd.2 (osd.2) 100 : cluster [DBG] 3.7 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:21.468603+0000 osd.2 (osd.2) 101 : cluster [DBG] 3.7 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1163264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:52.666663+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 459354 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1146880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:53.666851+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:23.451643+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.5 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:23.462269+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.5 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 103)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:23.451643+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.5 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:23.462269+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.5 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1146880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:54.667109+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:24.449104+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.c scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:24.459710+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.c scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 105)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:24.449104+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.c scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:24.459710+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.c scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1138688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:55.667441+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:25.453408+0000 osd.2 (osd.2) 106 : cluster [DBG] 7.1a scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:25.463988+0000 osd.2 (osd.2) 107 : cluster [DBG] 7.1a scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 107)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:25.453408+0000 osd.2 (osd.2) 106 : cluster [DBG] 7.1a scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:25.463988+0000 osd.2 (osd.2) 107 : cluster [DBG] 7.1a scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1130496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:56.667897+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1130496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:57.668120+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464178 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 1122304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:58.668364+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1130496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:59.668606+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:29.517001+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.e scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:29.527585+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.e scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 109)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:29.517001+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.e scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:29.527585+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.e scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1130496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:00.669369+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 1122304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:01.669581+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.979720116s of 11.018548965s, submitted: 10
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 1114112 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:02.669875+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:32.476637+0000 osd.2 (osd.2) 110 : cluster [DBG] 3.1d scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:32.487068+0000 osd.2 (osd.2) 111 : cluster [DBG] 3.1d scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 111)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:32.476637+0000 osd.2 (osd.2) 110 : cluster [DBG] 3.1d scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:32.487068+0000 osd.2 (osd.2) 111 : cluster [DBG] 3.1d scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 469002 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1105920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:03.670419+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 1081344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:04.670619+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 1081344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:05.671116+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:35.448290+0000 osd.2 (osd.2) 112 : cluster [DBG] 3.1e scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:35.458773+0000 osd.2 (osd.2) 113 : cluster [DBG] 3.1e scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 113)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:35.448290+0000 osd.2 (osd.2) 112 : cluster [DBG] 3.1e scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:35.458773+0000 osd.2 (osd.2) 113 : cluster [DBG] 3.1e scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 1073152 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:06.671541+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 1064960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:07.672052+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:37.460773+0000 osd.2 (osd.2) 114 : cluster [DBG] 3.5 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:37.471263+0000 osd.2 (osd.2) 115 : cluster [DBG] 3.5 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 115)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:37.460773+0000 osd.2 (osd.2) 114 : cluster [DBG] 3.5 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:37.471263+0000 osd.2 (osd.2) 115 : cluster [DBG] 3.5 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473826 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 1056768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:08.672419+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 1048576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:09.672756+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:39.443892+0000 osd.2 (osd.2) 116 : cluster [DBG] 3.8 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  will send 2026-01-29T09:16:39.454490+0000 osd.2 (osd.2) 117 : cluster [DBG] 3.8 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client handle_log_ack log(last 117)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:39.443892+0000 osd.2 (osd.2) 116 : cluster [DBG] 3.8 scrub starts
Jan 29 09:44:37 compute-0 ceph-osd[88193]: log_client  logged 2026-01-29T09:16:39.454490+0000 osd.2 (osd.2) 117 : cluster [DBG] 3.8 scrub ok
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:10.673063+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:11.673290+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:12.673515+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:13.673672+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:14.673852+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:15.674018+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:16.674194+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:17.674474+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:18.674773+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:19.675047+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:20.675340+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 999424 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:21.675522+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 999424 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:22.675663+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 991232 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:23.675796+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:24.676038+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:25.676287+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:26.676526+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:27.676724+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:28.676901+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:29.677057+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:30.677273+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:31.677554+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:32.677724+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:33.677950+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:34.678215+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:35.678485+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:36.678730+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:37.678880+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:38.679044+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 909312 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:39.679260+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 909312 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:40.679465+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 909312 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:41.679709+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:42.680079+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 892928 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:43.680244+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 884736 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:44.680447+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:45.680586+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:46.680767+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 860160 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:47.680944+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 860160 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:48.681097+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 860160 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:49.681209+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:50.681365+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:51.681678+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:52.682314+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:53.682633+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:54.682901+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:55.683079+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:56.683228+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:57.683660+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:58.683813+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:59.684030+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:00.684214+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:01.684420+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:02.684561+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:03.684709+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:04.684883+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:05.685325+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:06.685585+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:07.685757+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:08.686210+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:09.686649+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:10.687037+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:11.687314+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:12.687456+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:13.687618+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:14.687784+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:15.687943+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:16.688263+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:17.688461+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:18.688765+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:19.689046+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:20.689336+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:21.690340+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:22.690613+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:23.690835+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:24.691070+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:25.691254+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:26.691456+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:27.691724+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:28.691992+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:29.692268+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:30.692741+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:31.693116+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:32.693407+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:33.693777+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:34.694060+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:35.694252+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:36.694516+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:37.694827+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:38.695030+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:39.695238+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:40.695381+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:41.695631+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:42.696017+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:43.696177+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:44.696357+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:45.696562+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:46.696697+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:47.696850+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:48.697019+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:49.697201+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 614400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:50.697427+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:51.697673+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:52.697832+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:53.697976+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:54.698177+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:55.698337+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:56.698488+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:57.698704+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:58.698930+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:59.699119+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:00.699299+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:01.699462+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:02.699593+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:03.699723+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:04.699851+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:05.700004+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:06.700204+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:07.700429+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:08.700628+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:09.700743+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:10.700901+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:11.701435+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:12.701613+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:13.701755+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:14.701881+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:15.702008+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:16.702145+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:17.702296+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:18.702447+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:19.702647+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:20.702843+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:21.703198+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:22.703366+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:23.703571+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:24.703800+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:25.704016+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:26.704269+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:27.704428+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:28.704627+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:29.704814+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:30.704975+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:31.705193+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:32.705412+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:33.705707+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:34.705975+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:35.706247+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:36.706398+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:37.706593+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:38.706804+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1064: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:39.707020+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:40.707251+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:41.707491+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:42.707704+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:43.707877+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:44.708182+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:45.708392+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:46.708598+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:47.708769+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:48.708932+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:49.709177+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:50.709308+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:51.709504+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:52.709679+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:53.709917+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:54.710084+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:55.710258+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:56.710512+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:57.710750+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:58.710906+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:59.711087+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:00.711267+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 335872 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:01.711508+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 335872 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:02.711716+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:03.711985+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:04.712194+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:05.712388+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:06.712616+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:07.712805+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:08.712969+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:09.713196+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:10.713371+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:11.713575+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:12.713758+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:13.714006+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 286720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:14.714200+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 286720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:15.714369+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:16.714553+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:17.714732+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:18.715011+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:19.715166+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:20.715332+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:21.715622+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:22.715853+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:23.715991+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:24.716240+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:25.716416+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:26.716573+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:27.716788+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:28.717052+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:29.717204+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:30.717375+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:31.717634+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:32.717892+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:33.718043+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:34.718245+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:35.718475+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:36.718669+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:37.718825+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:38.719008+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:39.719248+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:40.719425+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:41.719627+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:42.719754+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:43.719914+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:44.720088+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:45.720262+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:46.720471+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:47.720694+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:48.720839+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:49.721043+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:50.721236+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:51.721428+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:52.721570+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:53.721731+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:54.721868+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:55.722056+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:56.722268+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:57.722432+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:58.722677+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:59.722904+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:00.723103+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:01.723412+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:02.723669+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:03.723809+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:04.723963+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:05.724434+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:06.724616+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:07.725348+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:08.725489+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:09.725654+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:10.725813+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:11.726051+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:12.726227+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 90112 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:13.726410+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 90112 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:14.726599+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:15.726770+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:16.726920+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:17.727088+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:18.727241+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:19.727458+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:20.727691+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:21.727970+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:22.728168+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:23.728395+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:24.728688+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:25.728883+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 49152 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:26.729073+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 49152 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:27.729203+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:28.729341+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:29.729486+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:30.729607+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:31.729835+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:32.729979+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:33.730194+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:34.730368+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:35.730603+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:36.730771+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:37.730972+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:38.731215+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:39.731466+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:40.731727+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:41.732058+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:42.732272+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:43.732501+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:44.732746+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:45.732956+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:46.733234+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:47.733386+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:48.733552+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:49.733724+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:50.733889+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:51.734088+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:52.734244+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:53.734441+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:54.734624+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:55.734788+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:56.734975+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:57.735188+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:58.735310+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:59.735403+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:00.735568+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:01.735791+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:02.735948+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:03.736094+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:04.736228+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:05.736429+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:06.736622+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:07.736793+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:08.736938+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:09.737096+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:10.737262+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:11.737445+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:12.737598+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:13.737789+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:14.737985+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:15.738350+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:16.738511+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:17.738706+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:18.738901+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:19.739059+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:20.739205+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:21.739384+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:22.739570+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:23.739717+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:24.739876+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:25.740051+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:26.740187+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:27.740304+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:28.740490+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:29.740610+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:30.740764+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:31.740976+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:32.741235+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:33.741397+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:34.741581+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:35.741719+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:36.741919+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:37.742157+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:38.742448+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:39.742643+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:40.742871+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:41.743093+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:42.743282+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:43.743448+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:44.743654+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:45.743815+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:46.744056+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:47.744241+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:48.744419+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:49.744572+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:50.744738+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:51.744913+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:52.745056+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:53.745207+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:54.745355+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:55.745502+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:56.745636+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:57.745919+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:58.746081+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:59.746441+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:00.746588+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:01.747352+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:02.747619+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:03.747770+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:04.747964+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:05.748166+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:06.748357+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:07.748510+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:08.748668+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:09.748819+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:10.749034+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:11.749202+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:12.749367+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:13.749512+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:14.749667+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:15.749884+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:16.750086+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:17.750236+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:18.750413+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:19.750655+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:20.750819+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:21.751086+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:22.751389+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:23.751587+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:24.751825+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:25.751983+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:26.752228+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:27.752379+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:28.752557+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:29.752802+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:30.752947+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:31.753163+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:32.753374+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:33.753596+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:34.754336+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:35.754473+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:36.754676+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Cumulative writes: 4171 writes, 19K keys, 4171 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4171 writes, 357 syncs, 11.68 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4171 writes, 19K keys, 4171 commit groups, 1.0 writes per commit group, ingest: 15.88 MB, 0.03 MB/s
                                           Interval WAL: 4171 writes, 357 syncs, 11.68 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:37.754831+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:38.755036+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:39.755199+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:40.755445+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:41.755730+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:42.755938+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:43.756242+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:44.756439+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:45.756608+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:46.756812+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:47.756989+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:48.757215+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:49.757422+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:50.757589+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:51.758004+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:52.758163+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:53.758329+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:54.758517+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:55.758664+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:56.758844+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:57.759004+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:58.759168+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:59.759298+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:00.759458+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:01.759641+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:02.759791+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:03.759934+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:04.760075+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:05.760197+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:06.760336+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:07.760481+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:08.760653+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:09.760846+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:10.761009+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:11.761217+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:12.761423+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:13.761619+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:14.761767+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:15.761913+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:16.762066+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:17.762201+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:18.762335+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:19.762469+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:20.762647+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:21.762869+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:22.762990+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:23.763158+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:24.763323+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:25.763663+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:26.763840+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:27.764022+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:28.764206+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:29.764323+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:30.764445+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:31.764587+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:32.764712+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:33.764827+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:34.764964+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:35.765113+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:36.765269+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:37.765443+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:38.765618+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:39.765779+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:40.765940+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:41.766156+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:42.766308+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:43.767243+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:44.767387+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:45.767556+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:46.767713+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:47.767873+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:48.768025+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:49.768195+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:50.768299+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:51.768441+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:52.768569+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:53.768708+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:54.768861+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:55.769019+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:56.769411+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:57.769611+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:58.769757+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:59.769921+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:00.770117+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:01.770342+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 319488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:02.770468+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 319488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:03.770627+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 319488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:04.770787+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 311296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:05.770973+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 311296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:06.771205+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 303104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:07.771426+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 303104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:08.771659+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 303104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:09.771840+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 294912 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:10.772026+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 294912 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:11.772239+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:12.772435+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:13.772677+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:14.772875+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:15.773070+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:16.773225+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:17.773418+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:18.773565+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:19.773722+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:20.773900+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:21.774160+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:22.774378+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:23.774624+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:24.774808+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 245760 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:25.774964+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 245760 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:26.775154+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 245760 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:27.775303+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 237568 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:28.775423+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 237568 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:29.775574+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 229376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:30.775717+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 229376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:31.775872+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 229376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:32.776032+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 221184 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:33.776238+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 221184 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:34.776395+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 212992 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:35.776521+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 212992 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:36.776646+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:37.776813+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:38.776975+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:39.777180+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:40.777391+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:41.777609+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:42.777739+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:43.777890+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:44.778039+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:45.778199+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:46.778395+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:47.778568+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:48.778738+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:49.778895+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:50.779235+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:51.779414+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:52.779692+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:53.779845+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:54.780053+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:55.780250+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:56.780450+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:57.780670+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:58.780898+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:59.781170+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:00.781334+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:01.781518+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:02.781675+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:03.781943+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:04.782103+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:05.782289+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:06.782485+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:07.782645+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:08.782812+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:09.782950+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:10.783160+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:11.783368+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:12.783562+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:13.783735+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:14.783855+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:15.784046+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:16.784196+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:17.784349+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:18.784524+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:19.784710+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:20.784914+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:21.785097+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:22.785200+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:23.785334+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:24.785483+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:25.785640+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:26.785847+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:27.786025+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:28.786255+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:29.786424+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:30.786621+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:31.786842+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:32.787051+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:33.787257+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:34.787466+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:35.787631+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:36.787794+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:37.787982+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:38.788299+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:39.788567+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:40.789330+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:41.789645+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:42.790081+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:43.790693+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:44.790848+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:45.791002+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:46.791174+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:47.791346+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:48.791544+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:49.791732+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:50.791937+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:51.792101+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:52.792330+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:53.792510+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:54.792636+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:55.792815+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:56.793003+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:57.793168+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:58.793341+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:59.793476+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:00.793637+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:01.793810+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:02.793994+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:03.794194+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:04.794330+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:05.794480+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:06.794626+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:07.794762+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:08.794938+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:09.795116+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:10.795326+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:11.795519+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:12.795675+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:13.795850+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:14.796094+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:15.796195+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:16.796376+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:17.796552+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:18.796694+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:19.796833+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:20.797003+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:21.797250+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:22.797385+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:23.797528+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:24.797704+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:25.797891+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:26.798033+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:27.798203+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:28.798441+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:29.798572+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:30.798762+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:31.798969+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:32.799188+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:33.799315+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:34.799466+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:35.799665+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:36.799830+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:37.800037+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:38.800247+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:39.800429+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:40.800566+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:41.800769+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:42.800966+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:43.801123+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:44.801340+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:45.801568+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:46.801770+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:47.801945+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:48.802194+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:49.802424+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:50.802616+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:51.802906+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:52.803084+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:53.803220+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:54.803461+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:55.803716+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:56.804197+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:57.804413+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:58.804747+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:59.805037+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:00.805223+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:01.805456+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:02.805663+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:03.805842+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:04.806029+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:05.806227+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:06.806443+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:07.806595+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:08.806801+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:09.807011+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:10.807240+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:11.807433+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:12.807586+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:13.807769+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:14.807958+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:15.808161+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:16.808310+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:17.808685+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:18.808880+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:19.809565+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:20.809733+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:21.810104+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:22.810842+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:23.811305+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:24.811462+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:25.812543+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:26.813247+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:27.813460+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:28.813610+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:29.813783+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:30.814295+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:31.814633+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:32.814793+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:33.814942+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:34.815100+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:35.815217+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:36.815385+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:37.815682+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:38.815849+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:39.816067+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:40.816324+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:41.816638+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:42.816914+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:43.817059+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:44.817219+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:45.817358+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:46.817556+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:47.817964+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:48.818111+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: mgrc ms_handle_reset ms_handle_reset con 0x55bdea7d6000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1795618739
Jan 29 09:44:37 compute-0 ceph-osd[88193]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1795618739,v1:192.168.122.100:6801/1795618739]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: get_auth_request con 0x55bdeb2a2000 auth_method 0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: mgrc handle_mgr_configure stats_period=5
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:49.818281+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:50.818421+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:51.818585+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:52.818779+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:53.818969+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:54.819122+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:55.819315+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:56.819505+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:57.819691+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:58.819797+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:59.819951+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:00.820084+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:01.820275+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:02.820414+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:03.820561+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:04.821240+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:05.821875+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:06.821991+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:07.822119+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:08.822244+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:09.822394+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:10.822557+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:11.822766+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:12.822940+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:13.823062+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:14.823208+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:15.823377+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:16.823626+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:17.823822+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:18.823991+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:19.824263+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:20.824432+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:21.824630+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:22.824771+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:23.825175+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:24.825329+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:25.825476+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:26.825692+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:27.825869+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:28.826021+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:29.826238+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:30.826421+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:31.826602+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:32.826785+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:33.827026+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:34.827179+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:35.827312+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:36.827521+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:37.827691+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 1187840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:38.827869+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:39.828048+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:40.828195+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:41.828346+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:42.828480+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:43.828706+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:44.828922+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:45.829078+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:46.829295+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:47.829499+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:48.829698+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:49.829932+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:50.830124+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:51.830440+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:52.830622+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:53.830908+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:54.831086+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:55.831246+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:56.831577+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:57.831834+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:58.832046+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:59.832289+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:00.832442+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:01.832685+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:02.832893+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:03.833102+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:04.833306+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:05.833511+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:06.833714+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:07.833876+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:08.834112+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:09.834346+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:10.834567+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:11.834761+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:12.834917+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:13.835092+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:14.835259+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:15.835418+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:16.835566+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:17.835760+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:18.836012+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:19.836414+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:20.836571+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:21.836763+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:22.836913+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:23.837086+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:24.837224+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:25.837408+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:26.837510+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:27.837622+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:28.837769+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:29.837882+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:30.838006+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:31.838184+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:32.838319+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:33.838415+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:34.838531+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:35.838646+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:36.838796+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:37.838986+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:38.839128+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:39.839297+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:40.839426+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:41.839624+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:42.839774+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:43.839986+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:44.840125+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:45.840292+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:46.840521+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:47.840663+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:48.840806+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:49.840957+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:50.841212+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:51.841438+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:52.841599+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:53.841809+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:54.842009+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:55.842216+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:56.842410+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:57.842558+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:58.842734+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:59.842878+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:00.843074+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:01.843237+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:02.843376+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:03.843704+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:04.843882+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:05.844091+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:06.844262+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:07.844470+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:08.844652+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:09.844864+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:10.845042+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:11.845298+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:12.845465+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:13.845635+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:14.845826+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:15.845971+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:16.846090+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:17.846210+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:18.846343+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:19.846603+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:20.846844+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:21.847173+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:22.847372+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:23.847525+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:24.847778+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:25.847958+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 1179648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:26.848236+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:27.848454+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:28.848613+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:29.848787+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:30.849314+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:31.849701+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:32.849947+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:33.850173+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:34.850363+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:35.850653+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:36.850928+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:37.851198+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:38.851381+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:39.851561+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:40.851731+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:41.851915+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:42.852051+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:43.852265+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:44.852446+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:45.852655+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:46.852845+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:47.853005+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:48.853203+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:49.853383+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:50.853559+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:51.853750+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:52.853890+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:53.854027+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:54.854206+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:55.854421+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:56.854594+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:57.854791+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 1171456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:58.854917+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:59.855117+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:00.855338+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:01.855526+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:02.855730+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:03.855902+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:04.856079+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:05.856248+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:06.856441+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:07.856602+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:08.856789+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:09.856935+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:10.857115+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:11.857381+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:12.857520+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:13.857769+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:14.857941+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:15.858120+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:16.858322+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:17.858476+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:18.858629+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:19.858792+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:20.858966+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:21.859076+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:22.859233+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:23.861213+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:24.861403+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:25.861571+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:26.861703+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:27.861835+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:28.861979+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:29.862126+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:30.862364+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:31.862638+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:32.862870+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:33.863048+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:34.863249+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:35.863436+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:36.863614+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:37.863767+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:38.863991+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:39.864202+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:40.864384+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:41.864561+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:42.864735+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:43.864923+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:44.865190+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:45.865416+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:46.865621+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:47.865808+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:48.865978+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:49.866171+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:50.866305+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:51.866501+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:52.866669+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:53.866824+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:54.866988+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:55.867256+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:56.867482+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:57.867661+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:58.867854+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:59.868027+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:00.868218+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:01.868400+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:02.868527+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:03.868816+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:04.868988+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:05.869164+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:06.869290+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476237 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:07.869385+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 1163264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:08.869516+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 967.346862793s of 967.389648438s, submitted: 8
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:09.869660+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 47 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x2eb8a/0x7b000, compress 0x0/0x0/0x0, omap 0x65db, meta 0x1a29a25), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:10.869736+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 17571840 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:11.869885+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 48 handle_osd_map epochs [49,49], i have 48, src has [1,49]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 49 ms_handle_reset con 0x55bdeb712400 session 0x55bdeb7be700
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 534206 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 17539072 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:12.869999+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 17235968 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:13.870108+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 49 ms_handle_reset con 0x55bdeb712c00 session 0x55bdeac19340
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 17186816 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:14.870194+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 17186816 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:15.870332+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 50 heartbeat osd_stat(store_statfs(0x4fcccc000/0x0/0x4ffc00000, data 0x14a43a0/0x14fa000, compress 0x0/0x0/0x0, omap 0x7190, meta 0x1a28e70), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 17186816 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:16.870504+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 602690 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 17186816 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:17.870656+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 17203200 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:18.870831+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 17211392 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:19.870971+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:20.871095+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:21.871290+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 603942 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:22.871437+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:23.871591+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:24.871749+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:25.871865+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:26.871994+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 603942 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:27.872167+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:28.872297+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:29.872445+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:30.872728+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:31.872964+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 603942 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:32.873183+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:33.873338+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:34.873497+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:35.873664+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 17195008 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:36.873787+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 603942 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Cumulative writes: 4249 writes, 19K keys, 4249 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4249 writes, 388 syncs, 10.95 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 78 writes, 338 keys, 78 commit groups, 1.0 writes per commit group, ingest: 0.24 MB, 0.00 MB/s
                                           Interval WAL: 78 writes, 31 syncs, 2.52 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.12              0.00         1    0.123       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.179       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bde8e9f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 17162240 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:37.873948+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 17162240 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:38.874095+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 17162240 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:39.874228+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 17162240 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:40.874374+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 17162240 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:41.874510+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.725841522s of 32.102912903s, submitted: 47
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 51 heartbeat osd_stat(store_statfs(0x4fcccd000/0x0/0x4ffc00000, data 0x14a5850/0x14fd000, compress 0x0/0x0/0x0, omap 0x7465, meta 0x1a28b9b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 605003 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 15925248 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:42.874614+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 52 ms_handle_reset con 0x55bdeb713400 session 0x55bdeb7be000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 52 ms_handle_reset con 0x55bdeb713c00 session 0x55bdea57da40
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 15679488 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdecc4d000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 52 ms_handle_reset con 0x55bdecc4d000 session 0x55bdeb6cdc00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdecc4d000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 52 ms_handle_reset con 0x55bdecc4d000 session 0x55bdeb6cd180
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:43.874735+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 15982592 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdedcb1c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:44.874851+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 53 ms_handle_reset con 0x55bdedcb1c00 session 0x55bdec498540
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdedcb1000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 53 ms_handle_reset con 0x55bdec890800 session 0x55bdec859880
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdecc4d400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 53 ms_handle_reset con 0x55bdecc4d400 session 0x55bdec859dc0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 53 heartbeat osd_stat(store_statfs(0x4fccc4000/0x0/0x4ffc00000, data 0x14a842d/0x1506000, compress 0x0/0x0/0x0, omap 0x8263, meta 0x1a27d9d), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 23871488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:45.874979+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdecc4c000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 23740416 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:46.875094+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 54 ms_handle_reset con 0x55bdecc4c000 session 0x55bdec87d500
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827651 data_alloc: 218103808 data_used: 287
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 23764992 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:47.875243+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 54 heartbeat osd_stat(store_statfs(0x4fa4c3000/0x0/0x4ffc00000, data 0x3ca9a1b/0x3d07000, compress 0x0/0x0/0x0, omap 0x8756, meta 0x1a278aa), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 23748608 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:48.875353+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 55 ms_handle_reset con 0x55bdedcb1000 session 0x55bdec498fc0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 55 ms_handle_reset con 0x55bdec890800 session 0x55bdeac196c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 23715840 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:49.875458+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb714400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 23691264 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:50.875636+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 56 ms_handle_reset con 0x55bdeb714400 session 0x55bdeb7be540
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 22691840 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:51.875804+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.403846741s of 10.037183762s, submitted: 116
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 57 ms_handle_reset con 0x55bdeb712400 session 0x55bdec472380
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 632544 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 57 heartbeat osd_stat(store_statfs(0x4fccb9000/0x0/0x4ffc00000, data 0x14adc8e/0x150f000, compress 0x0/0x0/0x0, omap 0x962f, meta 0x1a269d1), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 22618112 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:52.875935+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 22585344 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:53.876087+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 58 handle_osd_map epochs [58,59], i have 58, src has [1,59]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 22405120 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 59 ms_handle_reset con 0x55bdeb713c00 session 0x55bdeb5c4000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:54.876200+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 22462464 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:55.876308+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 60 ms_handle_reset con 0x55bdeb712400 session 0x55bdeac18fc0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 22478848 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 61 ms_handle_reset con 0x55bdec890800 session 0x55bdeca17880
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:56.876500+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 645765 data_alloc: 218103808 data_used: 252
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 22478848 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fccac000/0x0/0x4ffc00000, data 0x14b338d/0x151b000, compress 0x0/0x0/0x0, omap 0xa290, meta 0x1a25d70), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:57.876649+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdedcb1000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 21340160 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:58.876829+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 21340160 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:59.876970+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 63 ms_handle_reset con 0x55bdedcb1000 session 0x55bdeb5c5340
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 20217856 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:00.877212+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 64 ms_handle_reset con 0x55bdeb713400 session 0x55bdeb7bec40
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 20054016 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:01.877395+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fcca1000/0x0/0x4ffc00000, data 0x14b748e/0x1527000, compress 0x0/0x0/0x0, omap 0xb30b, meta 0x1a24cf5), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 65 ms_handle_reset con 0x55bdeb712c00 session 0x55bdeb7bfa40
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.239828110s of 10.334854126s, submitted: 135
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 671123 data_alloc: 218103808 data_used: 4313
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 66 ms_handle_reset con 0x55bdeb713400 session 0x55bdeac18a80
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 18907136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:02.877560+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb318c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 66 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 18800640 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 67 ms_handle_reset con 0x55bdeb318c00 session 0x55bdeaa55180
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:03.877737+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 67 ms_handle_reset con 0x55bdeb319000 session 0x55bdec82c700
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 67 ms_handle_reset con 0x55bdeb712c00 session 0x55bdeac181c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 67 ms_handle_reset con 0x55bdeb319400 session 0x55bdeb5c4c40
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 18612224 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:04.877898+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 67 handle_osd_map epochs [67,68], i have 68, src has [1,68]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 68 ms_handle_reset con 0x55bdeb319000 session 0x55bdeb5c48c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 18554880 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:05.878072+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 18554880 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:06.878219+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fcc8f000/0x0/0x4ffc00000, data 0x14be5dd/0x153b000, compress 0x0/0x0/0x0, omap 0xc633, meta 0x1a239cd), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 688350 data_alloc: 218103808 data_used: 4313
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 18554880 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:07.878361+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb318c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 18636800 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:08.878575+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 70 ms_handle_reset con 0x55bdeb318c00 session 0x55bdeb7bfa40
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 18489344 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:09.878697+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 18448384 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 71 ms_handle_reset con 0x55bdeb319400 session 0x55bdeb5c41c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:10.878868+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 18374656 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 71 ms_handle_reset con 0x55bdeb713400 session 0x55bdeabc6a80
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:11.879079+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.132811546s of 10.004205704s, submitted: 173
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 72 ms_handle_reset con 0x55bdeb712400 session 0x55bdeb5c5880
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 696007 data_alloc: 218103808 data_used: 4313
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fcc8a000/0x0/0x4ffc00000, data 0x14c1831/0x1542000, compress 0x0/0x0/0x0, omap 0xd688, meta 0x1a22978), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 18284544 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:12.879247+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fcc86000/0x0/0x4ffc00000, data 0x14c2e1f/0x1543000, compress 0x0/0x0/0x0, omap 0xda0f, meta 0x1a225f1), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 18284544 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:13.879412+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 73 ms_handle_reset con 0x55bdeb712400 session 0x55bdeaa49340
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb318c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 73 ms_handle_reset con 0x55bdeb318c00 session 0x55bdeb5c5180
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 18219008 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:14.879553+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 74 ms_handle_reset con 0x55bdeb319000 session 0x55bdec8581c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 74 ms_handle_reset con 0x55bdeb319400 session 0x55bdeaafaa80
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 17842176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:15.879698+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 75 ms_handle_reset con 0x55bdeb712c00 session 0x55bdeaa54c40
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 17743872 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:16.879912+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 76 ms_handle_reset con 0x55bdeb713400 session 0x55bdeaafb180
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 706321 data_alloc: 218103808 data_used: 12435
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 17760256 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb318c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:17.880204+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 77 heartbeat osd_stat(store_statfs(0x4fcc7e000/0x0/0x4ffc00000, data 0x14c806a/0x154c000, compress 0x0/0x0/0x0, omap 0xed74, meta 0x1a2128c), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 17760256 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:18.880476+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 17760256 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:19.881238+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 17743872 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:20.881430+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 78 ms_handle_reset con 0x55bdeb319400 session 0x55bdec473880
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 17686528 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:21.881630+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 79 ms_handle_reset con 0x55bdeb712400 session 0x55bdec843880
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 724116 data_alloc: 218103808 data_used: 12517
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.947123528s of 10.242585182s, submitted: 149
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 17645568 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:22.881873+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fbacb000/0x0/0x4ffc00000, data 0x14cd6b4/0x155b000, compress 0x0/0x0/0x0, omap 0xfb85, meta 0x2bc047b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 80 ms_handle_reset con 0x55bdec890800 session 0x55bdec87c700
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 17522688 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:23.882063+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fbc00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 80 ms_handle_reset con 0x55bded5fbc00 session 0x55bdeac181c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 81 ms_handle_reset con 0x55bdeb319400 session 0x55bdec498540
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 81 heartbeat osd_stat(store_statfs(0x4fbacb000/0x0/0x4ffc00000, data 0x14cd6b4/0x155b000, compress 0x0/0x0/0x0, omap 0xfc0f, meta 0x2bc03f1), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 17367040 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:24.882350+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 17227776 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 82 ms_handle_reset con 0x55bdeb712400 session 0x55bdec472540
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:25.882549+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdeb713400 session 0x55bdec87dc00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:26.882764+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737140 data_alloc: 218103808 data_used: 12517
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:27.882973+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:28.883111+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:29.883242+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 83 heartbeat osd_stat(store_statfs(0x4fbac6000/0x0/0x4ffc00000, data 0x14d1ced/0x1564000, compress 0x0/0x0/0x0, omap 0x10855, meta 0x2bbf7ab), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread fragmentation_score=0.000134 took=0.001228s
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdec890800 session 0x55bdeb6cd880
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb800
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:30.883362+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 16850944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bded5fb400 session 0x55bdeb6cc1c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bded5fb800 session 0x55bdeca16a80
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bded5fb400 session 0x55bdeca161c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdeb319400 session 0x55bdec82dc00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdeb712400 session 0x55bdeaa54540
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb713400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdeb713400 session 0x55bdeaafa700
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdeb319400 session 0x55bdeaa55180
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:31.883497+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 17137664 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 83 heartbeat osd_stat(store_statfs(0x4fbac8000/0x0/0x4ffc00000, data 0x14d1ced/0x1564000, compress 0x0/0x0/0x0, omap 0x10aec, meta 0x2bbf514), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 83 ms_handle_reset con 0x55bdeb712400 session 0x55bdeb6cc380
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737721 data_alloc: 218103808 data_used: 12571
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.843376160s of 10.071531296s, submitted: 98
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 84 ms_handle_reset con 0x55bded5fb400 session 0x55bdec4981c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb800
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 84 ms_handle_reset con 0x55bded5fb800 session 0x55bdec499a40
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:32.883607+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 17154048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:33.884721+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 17154048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fba9e000/0x0/0x4ffc00000, data 0x14f71e5/0x158c000, compress 0x0/0x0/0x0, omap 0x10e75, meta 0x2bbf18b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:34.884984+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 16932864 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:35.885217+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 16932864 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fba9e000/0x0/0x4ffc00000, data 0x14f71e5/0x158c000, compress 0x0/0x0/0x0, omap 0x10e75, meta 0x2bbf18b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:36.885442+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 16932864 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb714c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 84 ms_handle_reset con 0x55bdeb714c00 session 0x55bdeb5c48c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 745515 data_alloc: 218103808 data_used: 15131
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:37.885646+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 16785408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fbaa0000/0x0/0x4ffc00000, data 0x14f71e5/0x158c000, compress 0x0/0x0/0x0, omap 0x10e75, meta 0x2bbf18b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 84 handle_osd_map epochs [84,85], i have 85, src has [1,85]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 85 ms_handle_reset con 0x55bdeb319400 session 0x55bdeaafa380
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 85 ms_handle_reset con 0x55bded5fb400 session 0x55bdec4988c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 85 ms_handle_reset con 0x55bdeb712400 session 0x55bdec843500
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fb800
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 85 ms_handle_reset con 0x55bded5fb800 session 0x55bdec87ca80
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdecc20c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:38.885781+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 15728640 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 86 ms_handle_reset con 0x55bdecc20c00 session 0x55bdec87cfc0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 86 heartbeat osd_stat(store_statfs(0x4fba94000/0x0/0x4ffc00000, data 0x14fa1d3/0x1593000, compress 0x0/0x0/0x0, omap 0x11ae1, meta 0x2bbe51f), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:39.885932+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 15736832 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:40.886097+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 87 heartbeat osd_stat(store_statfs(0x4fba8f000/0x0/0x4ffc00000, data 0x14fb7bc/0x1596000, compress 0x0/0x0/0x0, omap 0x11d7a, meta 0x2bbe286), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:41.886335+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 15638528 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb319400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 758109 data_alloc: 218103808 data_used: 15643
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.955635071s of 10.048162460s, submitted: 84
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 87 ms_handle_reset con 0x55bdeb319400 session 0x55bdeca176c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:42.886511+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 15630336 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:43.886655+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 15630336 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 88 ms_handle_reset con 0x55bdeb712400 session 0x55bdeb7bfa40
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:44.886812+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 15622144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 88 ms_handle_reset con 0x55bdec890800 session 0x55bdec87d6c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 88 ms_handle_reset con 0x55bded5fb000 session 0x55bdeac19880
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bded5fac00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:45.887027+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 15572992 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 89 ms_handle_reset con 0x55bded5fac00 session 0x55bdeca17dc0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 89 heartbeat osd_stat(store_statfs(0x4fbab5000/0x0/0x4ffc00000, data 0x14da003/0x1575000, compress 0x0/0x0/0x0, omap 0x12554, meta 0x2bbdaac), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:46.888009+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 15572992 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 759964 data_alloc: 218103808 data_used: 14173
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:47.888398+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 15564800 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 90 ms_handle_reset con 0x55bdeb318c00 session 0x55bdec87ddc0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 90 ms_handle_reset con 0x55bdeb319000 session 0x55bdeb5c5180
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:48.888523+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 15564800 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdecc21c00
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 90 ms_handle_reset con 0x55bdecc21c00 session 0x55bdec86b6c0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:49.888744+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 15564800 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:50.889198+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdeb712400
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 15564800 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 90 ms_handle_reset con 0x55bdeb712400 session 0x55bdeaafbdc0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:51.889695+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 15564800 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 91 ms_handle_reset con 0x55bdec890800 session 0x55bdeb7be700
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 91 heartbeat osd_stat(store_statfs(0x4fbab6000/0x0/0x4ffc00000, data 0x14db4e7/0x1576000, compress 0x0/0x0/0x0, omap 0x1293b, meta 0x2bbd6c5), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 763859 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:52.889832+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:53.890192+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:54.890489+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 92 heartbeat osd_stat(store_statfs(0x4fbaad000/0x0/0x4ffc00000, data 0x14ddfa2/0x157b000, compress 0x0/0x0/0x0, omap 0x12e65, meta 0x2bbd19b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:55.890708+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 92 heartbeat osd_stat(store_statfs(0x4fbaad000/0x0/0x4ffc00000, data 0x14ddfa2/0x157b000, compress 0x0/0x0/0x0, omap 0x12e65, meta 0x2bbd19b), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:56.891045+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.702639580s of 14.894862175s, submitted: 114
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:57.891261+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:58.891544+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:59.891780+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:00.891983+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:01.892233+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:02.892395+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:03.892552+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _renew_subs
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:04.892718+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:05.892856+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:06.893090+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:07.893383+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:08.893573+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 15679488 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:09.893812+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:10.894194+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:11.894434+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:12.894621+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:13.894799+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:14.894929+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:15.895094+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:16.895337+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:17.895498+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:18.895664+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:19.895848+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:20.896001+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:21.896111+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:22.896274+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:23.896375+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:24.896502+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:25.896621+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:26.896799+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:27.896939+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:28.897068+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:29.897244+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:30.897406+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:31.897556+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:32.897674+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:33.897805+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:34.897939+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:35.898069+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:36.898207+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:37.898356+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:38.898475+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:39.898674+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:40.898799+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:41.898990+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:42.899198+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:43.899333+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:44.899493+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:45.899659+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:46.899802+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:47.900025+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:48.900180+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:49.900294+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:50.900446+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:51.900626+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:52.900816+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:53.900983+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:54.901157+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:55.901377+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:56.901529+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:57.901710+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:58.901881+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:59.902011+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:00.902158+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:01.902285+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:02.902416+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:03.902612+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:04.902742+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:05.902884+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:06.903011+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 15671296 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:07.903123+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 15540224 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'config diff' '{prefix=config diff}'
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'config show' '{prefix=config show}'
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'counter dump' '{prefix=counter dump}'
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:08.903271+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'counter schema' '{prefix=counter schema}'
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 15155200 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:09.903388+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 15081472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'log dump' '{prefix=log dump}'
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:10.903554+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 26124288 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'perf dump' '{prefix=perf dump}'
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'perf schema' '{prefix=perf schema}'
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:11.903694+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:12.903828+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:13.903978+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:14.904200+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:15.904325+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:16.904463+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:17.904595+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:18.904717+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:19.904873+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:20.905041+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:21.905988+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:22.906109+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:23.906224+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:24.906351+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:25.906472+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:26.906629+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:27.906756+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:28.906898+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:29.907070+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:30.907248+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:31.907472+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:32.907640+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:33.907791+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:34.907939+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:35.908126+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:36.908322+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:37.908469+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:38.908623+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:39.908780+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:40.908939+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:41.909123+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:42.909283+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:43.909427+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:44.909572+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:45.909859+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:46.910064+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:47.910259+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:48.910439+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:49.910651+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:50.910858+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:51.911084+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:52.911292+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:53.911534+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:54.911682+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:55.911848+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:56.912038+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:57.912226+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:58.912452+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:59.912629+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:00.912785+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:01.912999+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:02.913248+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:03.913404+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:04.913557+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:05.913752+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:06.913963+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:07.914188+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:08.914378+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:09.914540+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:10.914755+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:11.914979+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:12.915261+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:13.915380+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:14.915512+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:15.915693+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:16.915877+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:17.916023+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:18.916258+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:19.916425+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:20.916600+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:21.916798+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:22.916939+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:23.917123+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:24.917373+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:25.917546+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:26.917714+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 25731072 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:27.917860+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 25731072 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:28.918037+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 25731072 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:29.918212+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 25731072 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:30.918363+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 25731072 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:31.918561+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 25731072 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:32.918712+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 25731072 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:33.918866+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 25731072 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:34.919044+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 25731072 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:35.919193+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:36.919318+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:37.919450+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:38.919614+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:39.919825+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:40.919971+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:41.920200+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:42.920327+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:43.920463+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:44.920590+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:45.920795+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:46.920917+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:47.921038+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:48.921198+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:49.921351+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:50.921539+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:51.921696+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:52.921825+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:53.922025+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:54.922198+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:55.922336+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:56.922492+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:57.922621+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:58.922758+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:59.922906+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:00.923078+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:01.923384+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:02.923587+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:03.923764+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:04.923900+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:05.924061+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:06.924233+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:07.924403+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:08.924543+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:09.924751+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:10.924894+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:11.925182+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:12.925350+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:13.925607+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:14.925824+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:15.925979+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:16.926120+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:17.926305+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:18.926455+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:19.926608+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:20.926755+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:21.926980+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:22.927153+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:23.927300+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:24.927466+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:25.927662+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:26.927837+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:27.928021+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:28.929160+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:29.929341+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:30.929475+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:31.929700+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:32.929957+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:33.930186+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:34.930366+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:35.930514+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:36.930689+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:37.930866+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:38.931022+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:39.931209+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:40.931361+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:41.931529+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:42.931744+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:43.931968+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:44.932224+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:45.932363+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:46.932558+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:47.932733+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:48.932913+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:49.933111+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:50.933283+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:51.933463+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:52.933614+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:53.933752+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:54.933915+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 25821184 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:55.934247+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 25821184 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:56.934412+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 25821184 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:57.934536+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 25821184 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:58.934914+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 25821184 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:59.935157+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 25821184 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:00.935488+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 25821184 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:01.935811+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 25812992 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:02.936170+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 25812992 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:03.936442+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 25812992 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:04.936712+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 25812992 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:05.936939+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 25812992 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:06.937112+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 25812992 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:07.937261+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 25812992 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:08.937395+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 25812992 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:09.937634+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 25804800 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:10.937874+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 25804800 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:11.938180+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 25804800 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:12.938371+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 25804800 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:13.938584+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 25804800 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:14.938800+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 25804800 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:15.938969+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:16.954691+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:17.954860+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:18.955020+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:19.955299+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:20.955468+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:21.955663+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:22.955811+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:23.955963+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:24.956169+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:25.956325+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:26.956470+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:27.956632+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:28.956774+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:29.956955+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:30.957213+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 25796608 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:31.957459+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 25788416 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:32.957829+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 25788416 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:33.958004+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 25788416 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:34.958185+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 25788416 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:35.958373+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 25788416 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:36.958514+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 25788416 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:37.958667+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 25788416 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:38.958851+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 25788416 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:39.959024+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 25788416 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:40.959206+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 25788416 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:41.959412+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 25788416 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:42.959611+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 25788416 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:43.959801+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 25780224 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:44.959949+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 25780224 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:45.960113+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 25780224 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:46.960307+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 25780224 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:47.960483+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 25780224 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:48.960660+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 25780224 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:49.960854+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 25780224 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:50.961248+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 25780224 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:51.961515+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 25780224 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:52.961654+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 25780224 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:53.961787+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 25780224 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:54.961936+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 25780224 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:55.962102+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25772032 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:56.962287+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25772032 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:57.962520+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25772032 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:58.962668+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25772032 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:59.962804+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25772032 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:00.962941+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25772032 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:01.963121+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25772032 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:02.963301+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25772032 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:03.963416+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25772032 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:04.963580+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25772032 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:05.963723+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25772032 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:06.963854+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25772032 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:07.963998+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 25772032 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:08.964129+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 25763840 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:09.964304+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 25763840 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:10.964484+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:11.964687+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:12.964860+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:13.965026+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:14.965157+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:15.965297+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:16.965459+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:17.965620+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:18.965768+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:19.965943+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:20.966065+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:21.966228+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:22.966395+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:23.966558+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:24.966706+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:25.966872+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:26.967093+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:27.967211+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:28.967403+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 25755648 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:29.967609+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:30.967796+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:31.968035+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:32.968234+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 25747456 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:33.968417+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:34.968580+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:35.968842+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:36.969070+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:37.969255+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:38.969443+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 25739264 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:39.969742+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:40.969908+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:41.970086+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:42.970238+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:43.970428+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:44.970604+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:45.970808+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:46.970975+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:47.971263+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:48.971431+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:49.971622+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:50.971754+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 25722880 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:51.971908+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:52.972056+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 25714688 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:53.972236+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:54.972388+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:55.972610+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:56.972741+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:57.972925+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:58.973107+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:59.973281+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:00.973427+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:01.973664+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:02.973829+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 25706496 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:03.974000+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:04.974154+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:05.974311+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:06.974429+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:07.974597+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:08.974771+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:09.974924+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:10.975117+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:11.975357+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:12.975471+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:13.975666+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:14.975779+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:15.976035+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 25698304 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:16.976199+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:17.976370+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 25690112 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:18.976552+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 25681920 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:19.976673+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 25681920 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:20.976793+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 25681920 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:21.976967+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 25681920 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:22.977108+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 25681920 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:23.977252+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 25681920 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:24.977385+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 25673728 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:25.977590+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 25673728 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:26.977793+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 25673728 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:27.977935+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 25665536 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:28.978061+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 25665536 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:29.978231+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 25665536 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:30.978441+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 25665536 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:31.978635+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 25665536 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:32.978844+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 25665536 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:33.979025+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 25665536 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:34.979209+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 25665536 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:35.979392+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 25657344 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:36.979551+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 25657344 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:37.979671+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 25657344 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:38.979804+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 25649152 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:39.980031+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 25649152 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:40.980224+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 25649152 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:41.980469+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 25649152 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:42.980697+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 25649152 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:43.980938+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 25649152 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:44.981240+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 25649152 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:45.981492+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 25649152 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:46.981695+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:47.981857+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 25649152 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:48.982068+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 25649152 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:49.982258+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 25649152 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:50.982492+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 25649152 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:51.982699+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 25649152 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:52.982886+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 25640960 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:53.983034+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 25632768 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:54.983198+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 25632768 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:55.983400+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 25632768 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:56.983574+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 25632768 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:57.983726+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 25632768 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:58.983970+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 25632768 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:59.984290+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 25632768 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:00.984464+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 25632768 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:01.984688+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 25632768 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:02.984889+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 25632768 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:03.985100+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 25624576 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:04.985331+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 25624576 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:05.985548+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 25624576 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:06.985796+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 25624576 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:07.985960+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 25624576 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:08.986106+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 25616384 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:09.986844+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 25616384 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:10.987012+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 25616384 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:11.987215+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 25616384 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:12.987348+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 25616384 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:13.987561+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 25616384 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:14.987719+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 25616384 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:15.987928+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 25616384 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:16.988083+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 25616384 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:17.988214+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 25616384 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:18.988414+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 25616384 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:19.988646+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 25608192 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:20.988833+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 25608192 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:21.989085+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 25608192 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:22.989263+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 25608192 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:23.989510+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 25600000 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:24.989698+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 25600000 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:25.989887+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 25600000 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:26.990026+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 25600000 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:27.990200+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 25600000 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:28.990423+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 25600000 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:29.990671+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 25600000 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:30.990843+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 25600000 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:31.991024+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 25600000 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:32.991285+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 25600000 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:33.991493+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 25591808 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:34.991675+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 25591808 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:35.991873+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 25591808 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:36.992060+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 25591808 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:37.992240+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 25583616 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:38.992445+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 25583616 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:39.992645+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 25583616 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:40.992854+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 25583616 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:41.993042+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 25583616 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:42.993246+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 25583616 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:43.993457+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 25583616 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:44.993615+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 25583616 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:45.993822+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 25583616 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:46.993997+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 25583616 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:47.994204+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:48.994359+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:49.994485+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:50.994649+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:51.994838+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:52.995014+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:53.995196+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:54.995349+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:55.995578+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:56.995742+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:57.996013+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:58.996203+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:59.996370+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:00.996494+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:01.996672+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:02.996907+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:03.997259+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:04.997481+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:05.997660+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:06.997828+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:07.998055+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:08.998223+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 25575424 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:09.998453+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:10.998612+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:11.998775+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:12.998974+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:13.999237+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:14.999396+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:15.999556+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:16.999711+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:17.999915+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:19.000076+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:20.000208+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:21.000324+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:22.000531+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:23.000689+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:24.000846+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:25.001023+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:26.001189+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25567232 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:27.001345+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25559040 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:28.001519+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25559040 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:29.001664+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25559040 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:30.001814+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:31.001939+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:32.002201+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:33.002357+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:34.002547+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:35.002680+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:36.002862+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:37.003047+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.6 total, 600.0 interval
                                           Cumulative writes: 5841 writes, 24K keys, 5841 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5841 writes, 1098 syncs, 5.32 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1592 writes, 4293 keys, 1592 commit groups, 1.0 writes per commit group, ingest: 2.20 MB, 0.00 MB/s
                                           Interval WAL: 1592 writes, 710 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:38.003223+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:39.003387+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:40.003504+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:41.003664+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:42.003819+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:43.003951+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:44.004115+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:45.004284+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:46.004516+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:47.004655+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:48.004872+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:49.005023+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:50.005189+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:51.005362+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:52.005838+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:53.006021+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:54.006215+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 25550848 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:55.006406+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 25542656 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:56.006564+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 25534464 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:57.006745+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 25534464 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:58.006889+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 25534464 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:59.007213+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 25534464 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:00.007367+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 25534464 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:01.007574+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 25534464 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:02.007782+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 25534464 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:03.007908+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 25534464 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:04.008065+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 25534464 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:05.008268+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 25526272 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:06.008456+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 25526272 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:07.008655+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 25526272 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:08.008777+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 25526272 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:09.008955+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 25518080 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:10.009090+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 25518080 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:11.009238+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 25518080 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:12.009431+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 25518080 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:13.009643+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 25518080 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:14.009881+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 25518080 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:15.010068+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 25518080 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:16.010281+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 25518080 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:17.010528+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 25518080 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:18.010735+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 25518080 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:19.010975+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 25518080 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:20.011190+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 25518080 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:21.011355+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:22.011532+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:23.011655+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:24.011785+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:25.011934+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:26.012062+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:27.012178+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:28.012319+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:29.012533+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:30.012716+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:31.012878+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:32.013077+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:33.013243+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:34.013384+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:35.013560+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:36.013815+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 ms_handle_reset con 0x55bdeb715800 session 0x55bdeaa54000
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: handle_auth_request added challenge on 0x55bdec890800
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:37.014034+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:38.014204+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:39.014344+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:40.014483+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:41.014666+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:42.014839+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:43.014999+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:44.015121+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:45.015479+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:46.015645+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:47.015759+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:48.015977+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:49.016183+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:50.016402+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:51.016526+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:52.016705+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:53.016941+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:54.017081+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:55.017285+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:56.017406+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:57.017554+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:58.017696+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:59.017848+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:00.018004+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:01.018128+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:02.018290+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:03.018510+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:04.018650+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 25501696 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'config diff' '{prefix=config diff}'
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'config show' '{prefix=config show}'
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'counter dump' '{prefix=counter dump}'
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:37 compute-0 ceph-osd[88193]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:37 compute-0 ceph-osd[88193]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766167 data_alloc: 218103808 data_used: 14091
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:05.018884+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 25509888 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'counter schema' '{prefix=counter schema}'
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: tick
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_tickets
Jan 29 09:44:37 compute-0 ceph-osd[88193]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:06.019043+0000)
Jan 29 09:44:37 compute-0 ceph-osd[88193]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 25477120 heap: 102277120 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:37 compute-0 ceph-osd[88193]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbaac000/0x0/0x4ffc00000, data 0x14df452/0x157e000, compress 0x0/0x0/0x0, omap 0x1319b, meta 0x2bbce65), peers [0,1] op hist [])
Jan 29 09:44:37 compute-0 ceph-osd[88193]: do_command 'log dump' '{prefix=log dump}'
Jan 29 09:44:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15062 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:37 compute-0 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 09:44:37 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 29 09:44:37 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3247605354' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 29 09:44:37 compute-0 ceph-mon[75183]: from='client.15052 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:37 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3359693719' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 29 09:44:37 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/294022747' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 29 09:44:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15064 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 29 09:44:38 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/74756313' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 29 09:44:38 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15068 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:38 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 29 09:44:38 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4267780432' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 29 09:44:38 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15072 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:38 compute-0 ceph-mon[75183]: from='client.15058 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:38 compute-0 ceph-mon[75183]: pgmap v1064: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:38 compute-0 ceph-mon[75183]: from='client.15062 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:38 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3247605354' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 29 09:44:38 compute-0 ceph-mon[75183]: from='client.15064 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:38 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/74756313' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 29 09:44:38 compute-0 ceph-mon[75183]: from='client.15068 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:38 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4267780432' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 29 09:44:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1065: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:39 compute-0 crontab[257495]: (root) LIST (root)
Jan 29 09:44:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:44:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15076 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 29 09:44:39 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2289966809' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 29 09:44:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15078 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:39 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 29 09:44:39 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1645835490' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 29 09:44:39 compute-0 ceph-mon[75183]: from='client.15072 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:39 compute-0 ceph-mon[75183]: pgmap v1065: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:39 compute-0 ceph-mon[75183]: from='client.15076 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:39 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2289966809' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 29 09:44:39 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1645835490' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 29 09:44:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15082 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 29 09:44:40 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3644000534' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 29 09:44:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15086 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:40 compute-0 ceph-mon[75183]: from='client.15078 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:40 compute-0 ceph-mon[75183]: from='client.15082 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:40 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3644000534' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 29 09:44:40 compute-0 ceph-mon[75183]: from='client.15086 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:40 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 29 09:44:40 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2156848985' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 29 09:44:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15090 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1066: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.300786 1 0.000013
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.301902 0 0.000000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty lb MIN local-lis/les=39/40 n=0 ec=39/21 lis/c=39/39 les/c/f=40/40/0 sis=45) [0] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.339081 0 0.000000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:38.174800+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 1966080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 355294 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:39.174949+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 1867776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:40.175090+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 7 sent 5 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:09.463983+0000 osd.1 (osd.1) 6 : cluster [DBG] 7.1e scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:09.474598+0000 osd.1 (osd.1) 7 : cluster [DBG] 7.1e scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 7)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:09.463983+0000 osd.1 (osd.1) 6 : cluster [DBG] 7.1e scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:09.474598+0000 osd.1 (osd.1) 7 : cluster [DBG] 7.1e scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:41.175324+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e4000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:42.175500+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e4000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e4000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:43.175691+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 359227 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:44.175892+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 1900544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:45.176048+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 1900544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:46.176213+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 1892352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:47.176408+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 1892352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.815997124s of 10.003100395s, submitted: 235
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e4000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:48.176580+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 9 sent 7 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:17.558006+0000 osd.1 (osd.1) 8 : cluster [DBG] 7.1d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:17.568339+0000 osd.1 (osd.1) 9 : cluster [DBG] 7.1d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 1974272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 360776 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 9)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:17.558006+0000 osd.1 (osd.1) 8 : cluster [DBG] 7.1d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:17.568339+0000 osd.1 (osd.1) 9 : cluster [DBG] 7.1d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:49.176858+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:18.541437+0000 osd.1 (osd.1) 10 : cluster [DBG] 3.19 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:18.551993+0000 osd.1 (osd.1) 11 : cluster [DBG] 3.19 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 1892352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 11)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:18.541437+0000 osd.1 (osd.1) 10 : cluster [DBG] 3.19 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:18.551993+0000 osd.1 (osd.1) 11 : cluster [DBG] 3.19 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:50.177056+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 1892352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:51.177392+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:20.495690+0000 osd.1 (osd.1) 12 : cluster [DBG] 3.1a scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:20.506253+0000 osd.1 (osd.1) 13 : cluster [DBG] 3.1a scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 1884160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 13)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:20.495690+0000 osd.1 (osd.1) 12 : cluster [DBG] 3.1a scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:20.506253+0000 osd.1 (osd.1) 13 : cluster [DBG] 3.1a scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:52.178354+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 1884160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:53.179230+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 1875968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 368015 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:54.179898+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:23.458074+0000 osd.1 (osd.1) 14 : cluster [DBG] 7.12 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:23.468678+0000 osd.1 (osd.1) 15 : cluster [DBG] 7.12 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 1875968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 15)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:23.458074+0000 osd.1 (osd.1) 14 : cluster [DBG] 7.12 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:23.468678+0000 osd.1 (osd.1) 15 : cluster [DBG] 7.12 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:55.180556+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 1867776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:56.181194+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 1867776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:57.181710+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:58.182203+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 368015 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:59.182631+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.853828430s of 11.928766251s, submitted: 7
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 1818624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:00.183028+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:29.490645+0000 osd.1 (osd.1) 16 : cluster [DBG] 3.14 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:29.501275+0000 osd.1 (osd.1) 17 : cluster [DBG] 3.14 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 17)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:29.490645+0000 osd.1 (osd.1) 16 : cluster [DBG] 3.14 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:29.501275+0000 osd.1 (osd.1) 17 : cluster [DBG] 3.14 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 1810432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:01.183445+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 1794048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:02.183863+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:31.426611+0000 osd.1 (osd.1) 18 : cluster [DBG] 7.10 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:31.436903+0000 osd.1 (osd.1) 19 : cluster [DBG] 7.10 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 19)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:31.426611+0000 osd.1 (osd.1) 18 : cluster [DBG] 7.10 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:31.436903+0000 osd.1 (osd.1) 19 : cluster [DBG] 7.10 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 1794048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:03.184236+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:32.379396+0000 osd.1 (osd.1) 20 : cluster [DBG] 3.13 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:32.389831+0000 osd.1 (osd.1) 21 : cluster [DBG] 3.13 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 21)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:32.379396+0000 osd.1 (osd.1) 20 : cluster [DBG] 3.13 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:32.389831+0000 osd.1 (osd.1) 21 : cluster [DBG] 3.13 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 1794048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 375254 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:04.184559+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 1777664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:05.185174+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 1777664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:06.185403+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 1761280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:07.185655+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:36.348071+0000 osd.1 (osd.1) 22 : cluster [DBG] 7.17 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:36.358654+0000 osd.1 (osd.1) 23 : cluster [DBG] 7.17 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 1753088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 23)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:36.348071+0000 osd.1 (osd.1) 22 : cluster [DBG] 7.17 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:36.358654+0000 osd.1 (osd.1) 23 : cluster [DBG] 7.17 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:08.185909+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 1753088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 380080 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:09.186245+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:38.367906+0000 osd.1 (osd.1) 24 : cluster [DBG] 7.16 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:38.378469+0000 osd.1 (osd.1) 25 : cluster [DBG] 7.16 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 1728512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 25)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:38.367906+0000 osd.1 (osd.1) 24 : cluster [DBG] 7.16 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:38.378469+0000 osd.1 (osd.1) 25 : cluster [DBG] 7.16 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:10.186548+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.851775169s of 10.886636734s, submitted: 10
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 1720320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:11.186776+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:40.377210+0000 osd.1 (osd.1) 26 : cluster [DBG] 3.10 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:40.387760+0000 osd.1 (osd.1) 27 : cluster [DBG] 3.10 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 1712128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 27)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:40.377210+0000 osd.1 (osd.1) 26 : cluster [DBG] 3.10 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:40.387760+0000 osd.1 (osd.1) 27 : cluster [DBG] 3.10 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:12.187047+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 1703936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:13.187334+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:42.376437+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.14 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:42.387084+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.14 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 1703936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 384906 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 29)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:42.376437+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.14 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:42.387084+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.14 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:14.187712+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 1695744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:15.187963+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 1695744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:16.188287+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:45.334750+0000 osd.1 (osd.1) 30 : cluster [DBG] 7.b scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:45.345812+0000 osd.1 (osd.1) 31 : cluster [DBG] 7.b scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 1679360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 31)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:45.334750+0000 osd.1 (osd.1) 30 : cluster [DBG] 7.b scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:45.345812+0000 osd.1 (osd.1) 31 : cluster [DBG] 7.b scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:17.188523+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:46.374682+0000 osd.1 (osd.1) 32 : cluster [DBG] 3.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:46.385819+0000 osd.1 (osd.1) 33 : cluster [DBG] 3.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 1671168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:18.188736+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 33)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:46.374682+0000 osd.1 (osd.1) 32 : cluster [DBG] 3.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:46.385819+0000 osd.1 (osd.1) 33 : cluster [DBG] 3.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 1662976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 392139 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:19.188999+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:48.408754+0000 osd.1 (osd.1) 34 : cluster [DBG] 3.b scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:48.419342+0000 osd.1 (osd.1) 35 : cluster [DBG] 3.b scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 1662976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 35)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:48.408754+0000 osd.1 (osd.1) 34 : cluster [DBG] 3.b scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:48.419342+0000 osd.1 (osd.1) 35 : cluster [DBG] 3.b scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:20.189317+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 1662976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:21.189613+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.849021912s of 11.102084160s, submitted: 10
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 1630208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:22.189839+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:51.479984+0000 osd.1 (osd.1) 36 : cluster [DBG] 3.2 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:51.490623+0000 osd.1 (osd.1) 37 : cluster [DBG] 3.2 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 1630208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 37)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:51.479984+0000 osd.1 (osd.1) 36 : cluster [DBG] 3.2 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:51.490623+0000 osd.1 (osd.1) 37 : cluster [DBG] 3.2 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:23.190096+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 1622016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 394550 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:24.190353+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 1622016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:25.190583+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:54.418866+0000 osd.1 (osd.1) 38 : cluster [DBG] 3.0 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:54.429403+0000 osd.1 (osd.1) 39 : cluster [DBG] 3.0 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 39)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:54.418866+0000 osd.1 (osd.1) 38 : cluster [DBG] 3.0 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:54.429403+0000 osd.1 (osd.1) 39 : cluster [DBG] 3.0 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 1613824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:26.191023+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 1613824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:27.191280+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:56.421932+0000 osd.1 (osd.1) 40 : cluster [DBG] 7.0 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:56.431550+0000 osd.1 (osd.1) 41 : cluster [DBG] 7.0 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 41)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:56.421932+0000 osd.1 (osd.1) 40 : cluster [DBG] 7.0 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:56.431550+0000 osd.1 (osd.1) 41 : cluster [DBG] 7.0 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 1605632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:28.191585+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:57.394369+0000 osd.1 (osd.1) 42 : cluster [DBG] 3.4 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:57.404935+0000 osd.1 (osd.1) 43 : cluster [DBG] 3.4 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 43)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:57.394369+0000 osd.1 (osd.1) 42 : cluster [DBG] 3.4 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:57.404935+0000 osd.1 (osd.1) 43 : cluster [DBG] 3.4 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 1605632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 401783 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:29.191881+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 1597440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:30.192106+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:59.356990+0000 osd.1 (osd.1) 44 : cluster [DBG] 7.7 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:14:59.367685+0000 osd.1 (osd.1) 45 : cluster [DBG] 7.7 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 45)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:59.356990+0000 osd.1 (osd.1) 44 : cluster [DBG] 7.7 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:14:59.367685+0000 osd.1 (osd.1) 45 : cluster [DBG] 7.7 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 1597440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:31.192449+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 1589248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:32.192701+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 1589248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:33.192916+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 1581056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 404194 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:34.193116+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 1581056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:35.193387+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 1572864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:36.193623+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.730809212s of 14.968996048s, submitted: 10
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 1572864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:37.193772+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:06.448866+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:06.459234+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 47)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:06.448866+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:06.459234+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 1556480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:38.193987+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:07.450788+0000 osd.1 (osd.1) 48 : cluster [DBG] 3.1c scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:07.461526+0000 osd.1 (osd.1) 49 : cluster [DBG] 3.1c scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 49)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:07.450788+0000 osd.1 (osd.1) 48 : cluster [DBG] 3.1c scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:07.461526+0000 osd.1 (osd.1) 49 : cluster [DBG] 3.1c scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 1548288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 409018 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:39.194260+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 1548288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:40.194448+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 1523712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:41.194620+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:10.439508+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.19 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:10.450060+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.19 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 1523712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 51)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:10.439508+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.19 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:10.450060+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.19 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:42.194877+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 1515520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:43.195102+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 1515520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 411431 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:44.195357+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 1507328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:45.195639+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 1499136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:46.195880+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.676377296s of 10.012275696s, submitted: 7
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 1490944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:47.196188+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:16.375038+0000 osd.1 (osd.1) 52 : cluster [DBG] 4.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:16.385602+0000 osd.1 (osd.1) 53 : cluster [DBG] 4.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 1482752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 53)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:16.375038+0000 osd.1 (osd.1) 52 : cluster [DBG] 4.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:16.385602+0000 osd.1 (osd.1) 53 : cluster [DBG] 4.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:48.196541+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 1482752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 413842 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:49.196932+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 1474560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:50.197467+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 1466368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:51.197756+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:20.408859+0000 osd.1 (osd.1) 54 : cluster [DBG] 6.c scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:20.419297+0000 osd.1 (osd.1) 55 : cluster [DBG] 6.c scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 1458176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 55)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:20.408859+0000 osd.1 (osd.1) 54 : cluster [DBG] 6.c scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:20.419297+0000 osd.1 (osd.1) 55 : cluster [DBG] 6.c scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:52.198147+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:21.369087+0000 osd.1 (osd.1) 56 : cluster [DBG] 6.1e scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:21.379691+0000 osd.1 (osd.1) 57 : cluster [DBG] 6.1e scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 1458176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 57)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:21.369087+0000 osd.1 (osd.1) 56 : cluster [DBG] 6.1e scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:21.379691+0000 osd.1 (osd.1) 57 : cluster [DBG] 6.1e scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:53.198684+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:22.366053+0000 osd.1 (osd.1) 58 : cluster [DBG] 6.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:22.380164+0000 osd.1 (osd.1) 59 : cluster [DBG] 6.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 1449984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 423488 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 59)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:22.366053+0000 osd.1 (osd.1) 58 : cluster [DBG] 6.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:22.380164+0000 osd.1 (osd.1) 59 : cluster [DBG] 6.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:54.198920+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:23.407331+0000 osd.1 (osd.1) 60 : cluster [DBG] 6.6 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:23.421672+0000 osd.1 (osd.1) 61 : cluster [DBG] 6.6 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 1441792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 61)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:23.407331+0000 osd.1 (osd.1) 60 : cluster [DBG] 6.6 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:23.421672+0000 osd.1 (osd.1) 61 : cluster [DBG] 6.6 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:55.199244+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:24.414103+0000 osd.1 (osd.1) 62 : cluster [DBG] 4.4 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:24.424684+0000 osd.1 (osd.1) 63 : cluster [DBG] 4.4 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 1433600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 63)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:24.414103+0000 osd.1 (osd.1) 62 : cluster [DBG] 4.4 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:24.424684+0000 osd.1 (osd.1) 63 : cluster [DBG] 4.4 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:56.199480+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:25.378646+0000 osd.1 (osd.1) 64 : cluster [DBG] 6.4 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:25.389016+0000 osd.1 (osd.1) 65 : cluster [DBG] 6.4 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 1433600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 65)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:25.378646+0000 osd.1 (osd.1) 64 : cluster [DBG] 6.4 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:25.389016+0000 osd.1 (osd.1) 65 : cluster [DBG] 6.4 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:57.199719+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:26.341676+0000 osd.1 (osd.1) 66 : cluster [DBG] 6.1 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:26.352225+0000 osd.1 (osd.1) 67 : cluster [DBG] 6.1 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 1425408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 67)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:26.341676+0000 osd.1 (osd.1) 66 : cluster [DBG] 6.1 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:26.352225+0000 osd.1 (osd.1) 67 : cluster [DBG] 6.1 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:58.200020+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 1425408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430721 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:59.200249+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 1417216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:00.200544+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 1409024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:01.200689+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 1400832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:02.200850+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 1400832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:03.201052+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.769033432s of 16.953893661s, submitted: 15
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 1400832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 433132 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:04.201251+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:33.415251+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.7 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:33.425703+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.7 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 69)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:33.415251+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.7 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:33.425703+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.7 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 1384448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:05.201613+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 1384448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:06.201849+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 1376256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:07.202030+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 1376256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:08.202281+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 1368064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 433132 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:09.202473+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 1351680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:10.202692+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:39.524338+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.5 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:39.534814+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.5 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 71)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:39.524338+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.5 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:39.534814+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.5 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 1343488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:11.203076+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 1335296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:12.203365+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 1335296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:13.203514+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.903487206s of 10.152817726s, submitted: 4
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 1327104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 437954 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:14.203737+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:43.568065+0000 osd.1 (osd.1) 72 : cluster [DBG] 6.e scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:43.582292+0000 osd.1 (osd.1) 73 : cluster [DBG] 6.e scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 73)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:43.568065+0000 osd.1 (osd.1) 72 : cluster [DBG] 6.e scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:43.582292+0000 osd.1 (osd.1) 73 : cluster [DBG] 6.e scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 1310720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:15.204031+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 1302528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:16.204207+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 1294336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:17.204375+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:46.605097+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.f scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:46.615788+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.f scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 75)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:46.605097+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.f scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:46.615788+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.f scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 1294336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:18.204661+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 1253376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 442776 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:19.204970+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:48.686742+0000 osd.1 (osd.1) 76 : cluster [DBG] 6.b scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:48.700922+0000 osd.1 (osd.1) 77 : cluster [DBG] 6.b scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 1245184 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 77)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:48.686742+0000 osd.1 (osd.1) 76 : cluster [DBG] 6.b scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:48.700922+0000 osd.1 (osd.1) 77 : cluster [DBG] 6.b scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:20.205290+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 1236992 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:21.205507+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 1228800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:22.205717+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:51.678881+0000 osd.1 (osd.1) 78 : cluster [DBG] 4.2 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:51.689495+0000 osd.1 (osd.1) 79 : cluster [DBG] 4.2 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 1228800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 79)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:51.678881+0000 osd.1 (osd.1) 78 : cluster [DBG] 4.2 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:51.689495+0000 osd.1 (osd.1) 79 : cluster [DBG] 4.2 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:23.206076+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 1220608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 445187 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:24.206267+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.041762352s of 11.141496658s, submitted: 8
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 1204224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:25.206469+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:54.709627+0000 osd.1 (osd.1) 80 : cluster [DBG] 4.9 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:15:54.720249+0000 osd.1 (osd.1) 81 : cluster [DBG] 4.9 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 1204224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 81)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:54.709627+0000 osd.1 (osd.1) 80 : cluster [DBG] 4.9 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:15:54.720249+0000 osd.1 (osd.1) 81 : cluster [DBG] 4.9 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:26.206727+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 1204224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:27.206907+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 1204224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:28.209211+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 1196032 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 447598 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:29.209343+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 1196032 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:30.209477+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 1187840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:31.210759+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 1187840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:32.211219+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 1179648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:33.213190+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:02.688453+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.8 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:02.699130+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.8 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 83)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:02.688453+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.8 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:02.699130+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.8 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 450009 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1171456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:34.218711+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1171456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:35.220436+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.987721443s of 10.996203423s, submitted: 4
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1163264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:36.220787+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:05.705804+0000 osd.1 (osd.1) 84 : cluster [DBG] 6.17 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:05.716373+0000 osd.1 (osd.1) 85 : cluster [DBG] 6.17 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 85)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:05.705804+0000 osd.1 (osd.1) 84 : cluster [DBG] 6.17 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:05.716373+0000 osd.1 (osd.1) 85 : cluster [DBG] 6.17 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1155072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:37.221178+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1146880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:38.221430+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 452422 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1146880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:39.221576+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1146880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:40.221852+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1138688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:41.222067+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:10.667645+0000 osd.1 (osd.1) 86 : cluster [DBG] 4.12 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:10.678206+0000 osd.1 (osd.1) 87 : cluster [DBG] 4.12 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 87)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:10.667645+0000 osd.1 (osd.1) 86 : cluster [DBG] 4.12 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:10.678206+0000 osd.1 (osd.1) 87 : cluster [DBG] 4.12 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1138688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:42.223089+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 1130496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:43.223285+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454835 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 1130496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:44.223461+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 1130496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:45.223646+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.999300957s of 10.010235786s, submitted: 4
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 1114112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:46.223806+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:15.716032+0000 osd.1 (osd.1) 88 : cluster [DBG] 6.2 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:15.726555+0000 osd.1 (osd.1) 89 : cluster [DBG] 6.2 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 1114112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 89)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:15.716032+0000 osd.1 (osd.1) 88 : cluster [DBG] 6.2 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:15.726555+0000 osd.1 (osd.1) 89 : cluster [DBG] 6.2 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:47.223991+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 1105920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:48.225270+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:17.771947+0000 osd.1 (osd.1) 90 : cluster [DBG] 4.14 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:17.782456+0000 osd.1 (osd.1) 91 : cluster [DBG] 4.14 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462072 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 1097728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 91)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:17.771947+0000 osd.1 (osd.1) 90 : cluster [DBG] 4.14 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:17.782456+0000 osd.1 (osd.1) 91 : cluster [DBG] 4.14 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:49.225473+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:18.761646+0000 osd.1 (osd.1) 92 : cluster [DBG] 6.1c scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:18.775734+0000 osd.1 (osd.1) 93 : cluster [DBG] 6.1c scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1089536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 93)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:18.761646+0000 osd.1 (osd.1) 92 : cluster [DBG] 6.1c scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:18.775734+0000 osd.1 (osd.1) 93 : cluster [DBG] 6.1c scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:50.225696+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:19.802211+0000 osd.1 (osd.1) 94 : cluster [DBG] 2.1b scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:19.812842+0000 osd.1 (osd.1) 95 : cluster [DBG] 2.1b scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1089536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 95)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:19.802211+0000 osd.1 (osd.1) 94 : cluster [DBG] 2.1b scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:19.812842+0000 osd.1 (osd.1) 95 : cluster [DBG] 2.1b scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:51.225966+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:20.833098+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.13 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:20.843600+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.13 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1146880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 97)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:20.833098+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.13 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:20.843600+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.13 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:52.226247+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:21.786475+0000 osd.1 (osd.1) 98 : cluster [DBG] 5.12 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:21.797066+0000 osd.1 (osd.1) 99 : cluster [DBG] 5.12 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1138688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 99)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:21.786475+0000 osd.1 (osd.1) 98 : cluster [DBG] 5.12 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:21.797066+0000 osd.1 (osd.1) 99 : cluster [DBG] 5.12 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:53.226483+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:22.836857+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.15 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:22.847295+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.15 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 474135 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 1114112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 101)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:22.836857+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.15 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:22.847295+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.15 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:54.226790+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:23.816961+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.9 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:23.827507+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.9 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1089536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 103)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:23.816961+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.9 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:23.827507+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.9 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:55.227008+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:24.775991+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.11 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:24.786408+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.11 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1089536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 105)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:24.775991+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.11 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:24.786408+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.11 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:56.227270+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.001464844s of 11.047466278s, submitted: 18
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 1081344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:57.227438+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:26.763623+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.a scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:26.774202+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.a scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 107)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:26.763623+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.a scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:26.774202+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.a scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 1073152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:58.227640+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 478959 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 1073152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:59.227779+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 1048576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:00.227945+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 1048576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:01.228224+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:02.228795+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:03.229289+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 478959 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:04.229465+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:05.229797+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:06.230258+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:07.230795+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:08.231786+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 478959 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:09.231954+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:10.232207+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.007675171s of 14.012120247s, submitted: 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:11.232840+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:40.775772+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.16 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:40.786291+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.16 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 109)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:40.775772+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.16 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:40.786291+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.16 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:12.233531+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:13.233854+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:14.234231+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481372 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:15.234391+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:44.744417+0000 osd.1 (osd.1) 110 : cluster [DBG] 2.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:44.754904+0000 osd.1 (osd.1) 111 : cluster [DBG] 2.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 111)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:44.744417+0000 osd.1 (osd.1) 110 : cluster [DBG] 2.d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:44.754904+0000 osd.1 (osd.1) 111 : cluster [DBG] 2.d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:16.234740+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:17.234902+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:18.235251+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:19.235596+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:48.714252+0000 osd.1 (osd.1) 112 : cluster [DBG] 2.5 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:48.724803+0000 osd.1 (osd.1) 113 : cluster [DBG] 2.5 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 486194 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 113)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:48.714252+0000 osd.1 (osd.1) 112 : cluster [DBG] 2.5 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:48.724803+0000 osd.1 (osd.1) 113 : cluster [DBG] 2.5 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:20.235874+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:49.751737+0000 osd.1 (osd.1) 114 : cluster [DBG] 2.4 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:49.762265+0000 osd.1 (osd.1) 115 : cluster [DBG] 2.4 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 115)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:49.751737+0000 osd.1 (osd.1) 114 : cluster [DBG] 2.4 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:49.762265+0000 osd.1 (osd.1) 115 : cluster [DBG] 2.4 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:21.236180+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.041901588s of 11.056705475s, submitted: 8
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:22.236584+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:51.831978+0000 osd.1 (osd.1) 116 : cluster [DBG] 2.17 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:51.842558+0000 osd.1 (osd.1) 117 : cluster [DBG] 2.17 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 117)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:51.831978+0000 osd.1 (osd.1) 116 : cluster [DBG] 2.17 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:51.842558+0000 osd.1 (osd.1) 117 : cluster [DBG] 2.17 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:23.236817+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:24.237009+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:53.859446+0000 osd.1 (osd.1) 118 : cluster [DBG] 2.7 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:53.869980+0000 osd.1 (osd.1) 119 : cluster [DBG] 2.7 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 493429 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 119)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:53.859446+0000 osd.1 (osd.1) 118 : cluster [DBG] 2.7 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:53.869980+0000 osd.1 (osd.1) 119 : cluster [DBG] 2.7 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:25.237236+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:54.843250+0000 osd.1 (osd.1) 120 : cluster [DBG] 4.10 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:54.853763+0000 osd.1 (osd.1) 121 : cluster [DBG] 4.10 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 121)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:54.843250+0000 osd.1 (osd.1) 120 : cluster [DBG] 4.10 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:54.853763+0000 osd.1 (osd.1) 121 : cluster [DBG] 4.10 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:26.237434+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:27.237699+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:56.879766+0000 osd.1 (osd.1) 122 : cluster [DBG] 6.1d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:16:56.893939+0000 osd.1 (osd.1) 123 : cluster [DBG] 6.1d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:28.238016+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 123)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:56.879766+0000 osd.1 (osd.1) 122 : cluster [DBG] 6.1d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:16:56.893939+0000 osd.1 (osd.1) 123 : cluster [DBG] 6.1d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:29.238185+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 498255 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:30.238531+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:31.238993+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.050240517s of 10.068576813s, submitted: 8
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:32.239718+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:01.900540+0000 osd.1 (osd.1) 124 : cluster [DBG] 2.6 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:01.911116+0000 osd.1 (osd.1) 125 : cluster [DBG] 2.6 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 125)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:01.900540+0000 osd.1 (osd.1) 124 : cluster [DBG] 2.6 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:01.911116+0000 osd.1 (osd.1) 125 : cluster [DBG] 2.6 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:33.240610+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:34.241106+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 500666 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:35.241433+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:36.241756+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:37.241919+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:38.242414+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:39.242581+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:08.918672+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.1 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:08.929248+0000 osd.1 (osd.1) 127 : cluster [DBG] 5.1 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 503077 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 127)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:08.918672+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.1 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:08.929248+0000 osd.1 (osd.1) 127 : cluster [DBG] 5.1 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:40.243331+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:41.243859+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:42.244281+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:43.244695+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.022533417s of 12.031448364s, submitted: 4
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:44.244931+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:13.932561+0000 osd.1 (osd.1) 128 : cluster [DBG] 2.9 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:13.943103+0000 osd.1 (osd.1) 129 : cluster [DBG] 2.9 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 505488 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 129)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:13.932561+0000 osd.1 (osd.1) 128 : cluster [DBG] 2.9 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:13.943103+0000 osd.1 (osd.1) 129 : cluster [DBG] 2.9 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:45.245268+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:46.245461+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:47.245724+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:16.831181+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.f scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:16.841889+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.f scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 131)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:16.831181+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.f scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:16.841889+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.f scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:48.246343+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:49.246534+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 507899 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:50.246722+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:51.247203+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:52.247433+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:21.891339+0000 osd.1 (osd.1) 132 : cluster [DBG] 5.1a scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:21.901919+0000 osd.1 (osd.1) 133 : cluster [DBG] 5.1a scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 133)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:21.891339+0000 osd.1 (osd.1) 132 : cluster [DBG] 5.1a scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:21.901919+0000 osd.1 (osd.1) 133 : cluster [DBG] 5.1a scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:53.247691+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:54.247890+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 510312 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.010965347s of 11.032461166s, submitted: 6
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:55.248100+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:24.965100+0000 osd.1 (osd.1) 134 : cluster [DBG] 2.3 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:24.975677+0000 osd.1 (osd.1) 135 : cluster [DBG] 2.3 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 135)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:24.965100+0000 osd.1 (osd.1) 134 : cluster [DBG] 2.3 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:24.975677+0000 osd.1 (osd.1) 135 : cluster [DBG] 2.3 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:56.248365+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:57.248550+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:26.996725+0000 osd.1 (osd.1) 136 : cluster [DBG] 5.1d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:27.007250+0000 osd.1 (osd.1) 137 : cluster [DBG] 5.1d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 137)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:26.996725+0000 osd.1 (osd.1) 136 : cluster [DBG] 5.1d scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:27.007250+0000 osd.1 (osd.1) 137 : cluster [DBG] 5.1d scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:58.249018+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:59.249235+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 515136 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:00.249398+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:01.249664+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:31.013697+0000 osd.1 (osd.1) 138 : cluster [DBG] 5.19 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:31.024318+0000 osd.1 (osd.1) 139 : cluster [DBG] 5.19 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 139)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:31.013697+0000 osd.1 (osd.1) 138 : cluster [DBG] 5.19 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:31.024318+0000 osd.1 (osd.1) 139 : cluster [DBG] 5.19 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:02.250055+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:32.043399+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.c scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:32.053979+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.c scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 141)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:32.043399+0000 osd.1 (osd.1) 140 : cluster [DBG] 5.c scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:32.053979+0000 osd.1 (osd.1) 141 : cluster [DBG] 5.c scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:03.250362+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:33.062053+0000 osd.1 (osd.1) 142 : cluster [DBG] 5.18 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  will send 2026-01-29T09:17:33.072649+0000 osd.1 (osd.1) 143 : cluster [DBG] 5.18 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client handle_log_ack log(last 143)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:33.062053+0000 osd.1 (osd.1) 142 : cluster [DBG] 5.18 scrub starts
Jan 29 09:44:41 compute-0 ceph-osd[87035]: log_client  logged 2026-01-29T09:17:33.072649+0000 osd.1 (osd.1) 143 : cluster [DBG] 5.18 scrub ok
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:04.250632+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:05.250791+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:06.250965+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:07.251155+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:08.251325+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:09.251740+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:10.252067+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:11.252434+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:12.252906+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:13.253092+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:14.253476+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:15.253728+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:16.253919+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:17.254272+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:18.254604+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:19.254886+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:20.255176+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:21.255506+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:22.255890+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:23.256285+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:24.256542+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:25.256763+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:26.256948+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:27.257166+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:28.257335+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:29.257558+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:30.257813+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:31.258034+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:32.258276+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:33.258425+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:34.258667+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:35.258823+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:36.259104+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:37.259380+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:38.259598+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:39.259738+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:40.259866+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:41.260750+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:42.260986+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:43.261151+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:44.261376+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:45.261535+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:46.261743+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:47.262051+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:48.262262+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:49.262472+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:50.262699+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:51.262898+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:52.263174+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:53.263395+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:54.263598+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:55.263758+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:56.263927+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:57.264183+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:58.264340+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:59.264497+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:00.264870+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:01.265055+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:02.265202+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:03.265351+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:04.265482+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:05.265614+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:06.265885+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:07.266067+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:08.266243+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:09.266384+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:10.266590+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:11.266727+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:12.266910+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:13.267055+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:14.267201+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:15.267318+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:16.267461+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:17.267635+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:18.267800+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:19.267993+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:20.268239+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:21.268456+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:22.268720+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:23.268958+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:24.269119+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:25.269394+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:26.269560+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:27.269756+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:28.270019+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:29.270236+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:30.270405+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:31.270612+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:32.270906+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:33.271106+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:34.271360+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:35.271624+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:36.271881+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:37.272094+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:38.272317+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:39.272656+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 294912 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:40.272940+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 294912 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:41.273205+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:42.273614+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:43.273924+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:44.274405+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:45.274664+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:46.274952+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:47.275249+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:48.275478+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:49.275750+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:50.276033+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:51.276326+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: no keepalive since 2026-01-29T09:19:21.276400+0000 (2106-02-07T06:28:15.999913+0000 seconds), reconnecting
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _reopen_session rank -1
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _add_conns ranks=[0]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient(hunting): picked mon.compute-0 con 0x5579fea9fc00 addr [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient(hunting): start opening mon connection
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient(hunting): _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient(hunting): get_auth_request con 0x5579fea9fc00 auth_method 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient(hunting): _init_auth method 2
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient(hunting): _init_auth already have auth, reseting
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient(hunting): handle_auth_reply_more payload 9
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient(hunting): handle_auth_reply_more payload_len 9
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient(hunting): handle_auth_reply_more responding with 132 bytes
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient(hunting): handle_auth_done global_id 14197 payload 293
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _finish_hunting 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: found mon.compute-0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _finish_auth 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:51.277778+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_monmap mon_map magic: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient:  got monmap 1 from mon.compute-0 (according to old e1)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: dump:
                                           epoch 1
                                           fsid 3fdce3ca-565d-5459-88e8-1ffe58b48437
                                           last_changed 2026-01-29T09:11:34.210489+0000
                                           created 2026-01-29T09:11:34.210489+0000
                                           min_mon_release 20 (tentacle)
                                           election_strategy: 1
                                           0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_config config(9 keys)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: set_mon_vals no callback set
Jan 29 09:44:41 compute-0 ceph-osd[87035]: mgrc handle_mgr_map Got map version 9
Jan 29 09:44:41 compute-0 ceph-osd[87035]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1795618739,v1:192.168.122.100:6801/1795618739]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:55.777397+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:56.777634+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:57.777874+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:58.778066+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:59.778282+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:00.778479+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:01.778679+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 147456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:02.779187+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 147456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:03.779407+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:04.779606+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:05.779762+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:06.779970+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:07.780326+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:08.780517+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:09.780741+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:10.780904+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:11.781112+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:12.781580+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:13.781752+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:14.781965+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:15.782183+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:16.782365+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:17.782586+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:18.782855+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 90112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:19.783041+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 90112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:20.783360+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 90112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:21.783623+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:22.783936+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:23.784299+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:24.784648+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:25.784988+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:26.785274+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:27.785561+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:28.785769+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:29.785966+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:30.786300+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:31.786582+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:32.786941+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:33.787215+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:34.787471+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:35.787722+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:36.787912+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:37.788051+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:38.788291+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:39.788470+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:40.788668+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:41.788827+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:42.789073+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:43.789300+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 8192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:44.789478+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 8192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:45.789623+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:46.789839+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:47.790078+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:48.790319+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:49.790586+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:50.790750+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:51.790963+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:52.791210+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 1024000 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:53.791345+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 1024000 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:54.791489+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:55.791642+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:56.791876+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:57.792053+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:58.792324+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:59.792568+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:00.792854+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 999424 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:01.793225+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 999424 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:02.793787+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:03.794057+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:04.794301+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:05.794513+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:06.794717+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:07.794876+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:08.795089+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:09.795301+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:10.795479+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:11.795699+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:12.795913+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:13.796073+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:14.796278+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:15.796487+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:16.796930+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:17.797099+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:18.797222+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:19.797382+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:20.797593+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:21.797785+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:22.797957+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:23.798103+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:24.798290+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:25.798430+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:26.798654+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:27.798806+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:28.799355+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:29.799546+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:30.799787+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:31.799980+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:32.800206+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:33.800425+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:34.800631+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:35.800904+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:36.801179+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:37.801371+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:38.801594+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:39.801892+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:40.802126+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:41.802380+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:42.802785+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:43.803019+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:44.803278+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:45.803525+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:46.803732+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:47.803937+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:48.837676+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:49.837951+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:50.838180+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:51.838373+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:52.838818+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:53.839027+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:54.839245+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:55.839419+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:56.839653+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:57.839905+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:58.840085+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:59.840236+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:00.840473+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:01.840705+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:02.841055+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:03.841227+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:04.841377+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:05.841528+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:06.841676+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:07.841877+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:08.842064+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:09.842256+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:10.842433+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:11.842648+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:12.843557+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:13.843737+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:14.843866+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:15.844014+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:16.844189+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:17.844379+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:18.844605+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:19.844825+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:20.845009+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:21.845200+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:22.845447+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 720896 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:23.845639+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 720896 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:24.845788+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 712704 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:25.845931+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 712704 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:26.846111+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 712704 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:27.846313+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 704512 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:28.846495+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 704512 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:29.846688+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:30.846901+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:31.847124+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:32.847382+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:33.847536+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:34.847678+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:35.847826+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:36.848104+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65380352 unmapped: 671744 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:37.848339+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65380352 unmapped: 671744 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:38.848501+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65380352 unmapped: 671744 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:39.848642+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 663552 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:40.848783+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 663552 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:41.848984+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 663552 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:42.849216+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 655360 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:43.849407+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 655360 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:44.849580+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 647168 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:45.849796+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 647168 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:46.849995+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 647168 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:47.850160+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 638976 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:48.850320+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 638976 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:49.850441+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 630784 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:50.850600+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 630784 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:51.850763+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65429504 unmapped: 622592 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:52.850947+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65429504 unmapped: 622592 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:53.851080+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65437696 unmapped: 614400 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:54.851571+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65437696 unmapped: 614400 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:55.851727+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65437696 unmapped: 614400 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:56.851918+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 606208 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:57.852082+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 606208 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:58.852216+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 606208 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:59.852356+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 598016 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:00.852507+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 589824 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:01.852672+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 589824 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:02.852914+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 589824 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:03.853086+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 581632 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:04.853245+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 581632 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:05.853575+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 581632 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:06.853872+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 573440 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:07.854223+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 573440 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:08.854381+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:09.854531+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:10.854718+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:11.855087+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 557056 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:12.855528+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 557056 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:13.855865+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 557056 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:14.856259+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 548864 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:15.856685+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 548864 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:16.856875+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 540672 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:17.857093+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 540672 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:18.857400+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 532480 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:19.857589+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 532480 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:20.857848+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 532480 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:21.858121+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 524288 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:22.858372+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 524288 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:23.858607+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 516096 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:24.858775+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 516096 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:25.858923+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 516096 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:26.859083+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 507904 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:27.859287+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Cumulative writes: 4322 writes, 19K keys, 4322 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4322 writes, 406 syncs, 10.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4322 writes, 19K keys, 4322 commit groups, 1.0 writes per commit group, ingest: 16.03 MB, 0.03 MB/s
                                           Interval WAL: 4322 writes, 406 syncs, 10.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 425984 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:28.859476+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 425984 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:29.859695+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 417792 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:30.859864+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 417792 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:31.860032+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 417792 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:32.860243+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:33.860434+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 409600 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:34.860584+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 409600 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:35.860729+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 401408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:36.860887+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 401408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:37.861035+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 401408 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:38.861239+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 393216 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:39.861381+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 393216 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:40.861504+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:41.861727+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:42.862042+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:43.862338+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:44.862526+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:45.862745+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:46.863053+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:47.863861+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:48.864096+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 368640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:49.864345+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 368640 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:50.864536+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 360448 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:51.864791+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 360448 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:52.865057+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 360448 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:53.865292+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 352256 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:54.865479+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 352256 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:55.865658+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 344064 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:56.865879+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 344064 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:57.866089+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 344064 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:58.866226+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 335872 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:59.866416+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 335872 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:00.866579+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 327680 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:01.866788+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 327680 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:02.867029+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 319488 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:03.867215+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 319488 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:04.867408+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 319488 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:05.867606+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 311296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:06.867749+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 311296 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:07.867918+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 303104 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:08.868084+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 303104 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:09.868277+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 303104 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:10.868447+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 303104 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:11.868624+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 294912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:12.868873+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 294912 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:13.869018+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 286720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:14.869160+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 286720 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:15.869340+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:16.869466+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:17.869629+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 278528 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:18.869783+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 270336 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:19.869982+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 270336 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:20.870156+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 262144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:21.870315+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 262144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:22.870501+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 262144 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:23.870628+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 253952 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:24.870758+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 253952 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:25.870898+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 245760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:26.871043+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 245760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:27.871197+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 245760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:28.871324+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 237568 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:29.871516+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 237568 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:30.871660+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 237568 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:31.871852+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 229376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:32.872099+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 229376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:33.872223+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 229376 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:34.872409+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 221184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:35.872565+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 221184 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:36.872756+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 212992 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:37.872905+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 212992 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:38.873187+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 212992 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:39.873346+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:40.873480+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 204800 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:41.873604+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 196608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:42.873785+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 196608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:43.873945+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 196608 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:44.874081+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 188416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:45.874187+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 188416 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:46.874332+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 180224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:47.874530+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 180224 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:48.874662+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 172032 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:49.874801+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 172032 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:50.874940+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 172032 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:51.875070+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 163840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:52.875256+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 163840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:53.875390+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 163840 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:54.875520+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 155648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:55.875661+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 155648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:56.875819+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 155648 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:57.875984+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 147456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:58.876199+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 147456 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:59.876500+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 139264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:00.876705+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 139264 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:01.876853+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 131072 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:02.877083+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 131072 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:03.877346+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 122880 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:04.877488+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 122880 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:05.877743+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 122880 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:06.877970+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 114688 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:07.878199+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 114688 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:08.878361+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 106496 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:09.878571+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 106496 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:10.878727+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 106496 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:11.878937+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 98304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:12.879212+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 98304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:13.879367+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 98304 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:14.879598+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 90112 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:15.879773+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 90112 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:16.879926+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 81920 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:17.880064+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 81920 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:18.880298+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 73728 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:19.880530+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 73728 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:20.880686+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 73728 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:21.880857+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 65536 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:22.881070+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 65536 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:23.881226+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:24.881460+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:25.881641+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:26.881818+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:27.881989+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 57344 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:28.882225+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:29.882413+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:30.882615+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:31.882802+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:32.882992+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:33.883236+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:34.883381+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:35.883583+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:36.883718+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:37.883838+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:38.884608+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:39.884738+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:40.884957+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:41.885168+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:42.885518+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:43.885719+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:44.885935+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:45.886144+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:46.886370+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:48.578355+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:49.578551+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:50.578718+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:51.578855+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:52.579053+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:53.579295+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:54.579460+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:55.579664+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:56.579841+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:57.579998+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:58.580223+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:59.580437+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:00.580643+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:01.580829+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:02.581051+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:03.581323+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:04.581528+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:05.581721+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:06.581965+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:07.582175+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:08.582363+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:09.582578+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 49152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:10.582996+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:11.583230+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:12.583394+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:13.583630+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:14.583808+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:15.583973+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:16.584115+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:17.584314+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:18.584453+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:19.584632+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:20.584814+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:21.584989+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:22.585125+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:23.585313+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:24.585455+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:25.585600+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:26.585783+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:27.585977+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:28.586092+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:29.586237+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:30.586374+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:31.586536+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:32.586706+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:33.586884+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:34.587062+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:35.587208+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:36.587351+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:37.587510+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:38.588020+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:39.588205+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:40.588495+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:41.588748+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:42.588963+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:43.589329+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:44.589553+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:45.589713+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:46.589910+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:47.590688+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:48.590882+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:49.591015+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:50.591191+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:51.591489+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:52.591626+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:53.591882+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:54.592045+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:55.592194+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:56.592347+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:57.592463+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:58.593059+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:59.593269+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:00.593403+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:01.593542+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:02.593682+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:03.593902+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:04.594058+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:05.594202+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:06.594382+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:07.594553+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:08.594704+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:09.594851+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:10.595028+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:11.595203+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:12.595397+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:13.595597+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:14.595745+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:15.595907+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:16.596076+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:17.596208+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:18.596398+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 40960 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:19.596601+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:20.596846+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:21.597099+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:22.597260+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:23.597519+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:24.597711+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:25.597867+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:26.598049+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:27.598185+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:28.598359+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:29.598550+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:30.598722+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:31.598871+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:32.599032+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:33.599189+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 32768 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:34.599338+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:35.599497+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:36.599648+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:37.599787+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:38.599984+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:39.600200+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:40.600371+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:41.600527+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:42.600700+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:43.600923+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:44.601187+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:45.601308+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:46.601439+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:47.601572+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:48.601817+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:49.601937+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:50.602213+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:51.602400+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:52.602609+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:53.602924+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:54.603185+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:55.603461+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:56.603629+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:57.603796+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:58.603971+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:59.604162+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:00.604489+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:01.604632+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:02.604799+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:03.605001+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:04.605166+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:05.605344+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:06.605544+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:07.605716+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:08.605866+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:09.605997+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:10.606192+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:11.606328+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:12.606483+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:13.606650+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:14.606804+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:15.606989+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:16.608604+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:17.608785+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:18.609011+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:19.609272+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:20.609509+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:21.609689+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:22.610290+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:23.610709+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 24576 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:24.610920+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:25.611253+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:26.611564+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:27.611709+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:28.612063+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:29.612230+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:30.612414+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:31.612556+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:32.612724+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:33.613033+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:34.613224+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:35.613379+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:36.613504+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:37.613631+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:38.613758+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:39.614049+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:40.614824+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:41.614984+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:42.615168+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 16384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:43.615377+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: mgrc ms_handle_reset ms_handle_reset con 0x5579fcf14000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1795618739
Jan 29 09:44:41 compute-0 ceph-osd[87035]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1795618739,v1:192.168.122.100:6801/1795618739]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: get_auth_request con 0x5579fd00c400 auth_method 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: mgrc handle_mgr_configure stats_period=5
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:44.615499+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 ms_handle_reset con 0x5579fd8ccc00 session 0x5579fd0b7340
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fc7b2400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:45.615648+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 ms_handle_reset con 0x5579fd8cd000 session 0x5579fce748c0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd902400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:46.615766+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:47.615872+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:48.616021+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:49.616161+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:50.616305+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:51.616454+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:52.616600+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:53.616767+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:54.616922+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:55.617072+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:56.617214+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:57.617391+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:58.617557+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:59.617689+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:00.617871+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:01.618023+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:02.618212+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:03.618888+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:04.619626+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:05.621237+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:06.621365+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:07.621510+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:08.621620+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:09.621743+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:10.621875+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:11.622092+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:12.622199+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:13.622370+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:14.622503+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:15.622621+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:16.622927+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:17.623056+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:18.623180+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:19.623398+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:20.623520+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:21.623668+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:22.623826+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:23.623993+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:24.624124+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:25.624328+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:26.624463+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:27.624597+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:28.624728+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:29.624851+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:30.624960+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:31.625098+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:32.625222+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:33.625358+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:34.625549+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:35.625712+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:36.625868+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:37.626029+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:38.626192+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:39.626441+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:40.626574+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:41.626692+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:42.626824+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:43.626992+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:44.627104+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:45.627243+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:46.627454+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:47.627647+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:48.627817+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:49.627943+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:50.628074+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:51.628263+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:52.628430+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:53.628634+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:54.628772+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:55.628914+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:56.629059+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:57.629219+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:58.629393+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:59.629539+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:00.629702+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:01.629879+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:02.630077+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:03.630286+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:04.630419+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:05.630580+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:06.630704+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:07.630836+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:08.631005+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:09.631161+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:10.631335+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:11.631452+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:12.631605+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:13.631768+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:14.631942+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:15.632086+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:16.632220+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:17.632367+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:18.632550+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:19.632745+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:20.632897+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:21.633015+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:22.633200+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:23.633419+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:24.633585+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:25.633715+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:26.633834+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:27.633981+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:28.634167+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:29.634252+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:30.634368+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:31.634514+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:32.634697+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:33.634879+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:34.635017+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:35.635176+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:36.635325+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:37.635489+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:38.635604+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:39.635763+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:40.635921+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:41.636049+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:42.636182+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:43.636371+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:44.636486+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:45.636614+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:46.636762+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:47.636847+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:48.636955+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:49.637104+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:50.637457+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:51.637643+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:52.637818+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:53.638045+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:54.638200+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:55.638341+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:56.638475+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:57.638664+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:58.638839+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:59.639055+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:00.639215+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:01.639391+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:02.639896+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:03.641058+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:04.641609+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:05.641749+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:06.642380+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:07.642513+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:08.642719+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:09.642917+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:10.643120+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:11.643395+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:12.643586+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:13.643790+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:14.643953+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:15.644122+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:16.644284+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:17.644432+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:18.644568+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:19.644820+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:20.644979+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:21.645196+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:22.645387+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:23.645618+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:24.645777+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:25.645944+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:26.646188+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:27.646354+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:28.646594+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:29.646788+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:30.646961+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:31.647166+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:32.647305+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:33.647493+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:34.647624+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:35.647849+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:36.648044+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:37.648305+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:38.648511+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:39.648698+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:40.648873+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:41.649052+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:42.649255+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:43.649487+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:44.649657+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:45.649818+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:46.649970+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:47.650197+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:48.650359+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:49.650544+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:50.650732+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:51.650896+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:52.651039+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:53.651210+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:54.651379+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:55.651639+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:56.651851+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:57.652012+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:58.652214+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:59.652367+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:00.652672+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:01.652848+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:02.653068+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:03.653282+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:04.653433+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:05.653571+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:06.653738+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:07.653947+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:08.654190+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:09.654385+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:10.654546+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:11.654718+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:12.654911+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:13.655202+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:14.655403+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:15.655559+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:16.655757+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:17.655945+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:18.656114+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:19.656380+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:20.656910+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:21.657061+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:22.657194+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:23.657446+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:24.657688+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:25.657880+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:26.658076+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:27.658276+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:28.658472+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:29.658627+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:30.658822+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:31.658990+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:32.659187+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:33.659370+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:34.659518+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:35.659740+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:36.659884+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:37.660023+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:38.660203+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:39.660366+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:40.660569+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:41.660747+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:42.660931+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:43.661208+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:44.661353+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:45.661544+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:46.661715+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:47.661894+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:48.662086+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:49.662233+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:50.662360+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:51.662519+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:52.662663+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:53.662868+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:54.663045+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:55.663209+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:56.663356+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:57.663542+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:58.663737+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:59.663908+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:00.664091+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:01.664308+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:02.664474+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:03.664713+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:04.664912+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:05.665058+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:06.665204+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:07.665379+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522373 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:08.665554+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cd000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9e4a8/0xe6000, compress 0x0/0x0/0x0, omap 0x573d, meta 0x1a2a8c3), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:09.665746+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 46 handle_osd_map epochs [46,47], i have 47, src has [1,47]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 914.842529297s of 914.901733398s, submitted: 10
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:10.665853+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 47 handle_osd_map epochs [47,48], i have 48, src has [1,48]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:11.666016+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 17293312 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 49 ms_handle_reset con 0x5579fd8cd000 session 0x5579fef88a80
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:12.666191+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 49 heartbeat osd_stat(store_statfs(0x4fd466000/0x0/0x4ffc00000, data 0xd12692/0xd60000, compress 0x0/0x0/0x0, omap 0x5ee7, meta 0x1a2a119), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 17285120 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 599269 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:13.666338+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 17063936 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 49 handle_osd_map epochs [49,50], i have 49, src has [1,50]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 50 ms_handle_reset con 0x5579ff11e400 session 0x5579ff2f21c0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:14.666506+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 17063936 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:15.666636+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 50 heartbeat osd_stat(store_statfs(0x4fcc66000/0x0/0x4ffc00000, data 0x1513c8b/0x1564000, compress 0x0/0x0/0x0, omap 0x64aa, meta 0x1a29b56), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 17063936 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:16.666792+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 17063936 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:17.666951+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 17063936 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 645844 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:18.667298+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 17063936 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:19.667471+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:20.667687+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:21.667853+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:22.668003+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 648472 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:23.668184+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:24.668334+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:25.668486+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:26.668654+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:27.668779+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 17031168 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 648472 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Cumulative writes: 4402 writes, 20K keys, 4402 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4402 writes, 440 syncs, 10.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 80 writes, 336 keys, 80 commit groups, 1.0 writes per commit group, ingest: 0.20 MB, 0.00 MB/s
                                           Interval WAL: 80 writes, 34 syncs, 2.35 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.56              0.00         1    0.561       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.59              0.00         1    0.590       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.067       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.070       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5579fb5c7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:28.668958+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:29.669074+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:30.669201+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:31.669381+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:32.669525+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 648472 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:33.669679+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:34.669879+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:35.670043+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:36.670198+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:37.670332+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 648472 data_alloc: 218103808 data_used: 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:38.670459+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:39.670623+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fcc63000/0x0/0x4ffc00000, data 0x151513b/0x1567000, compress 0x0/0x0/0x0, omap 0x6787, meta 0x1a29879), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:40.670774+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 16998400 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:41.670998+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 16867328 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11ec00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:42.671112+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.686981201s of 32.861743927s, submitted: 46
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 52 ms_handle_reset con 0x5579ff11e800 session 0x5579fefbbdc0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 52 ms_handle_reset con 0x5579ff11ec00 session 0x5579fef9fa40
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 16637952 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 654840 data_alloc: 218103808 data_used: 19
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:43.671290+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11f000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 52 ms_handle_reset con 0x5579ff11f000 session 0x5579ff2f3c00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11f000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 52 ms_handle_reset con 0x5579ff11f000 session 0x5579fd0c3500
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 16809984 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:44.671417+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 52 heartbeat osd_stat(store_statfs(0x4fcc5e000/0x0/0x4ffc00000, data 0x1516b2b/0x156c000, compress 0x0/0x0/0x0, omap 0x6a1e, meta 0x1a295e2), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 52 handle_osd_map epochs [53,53], i have 53, src has [1,53]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 52 handle_osd_map epochs [53,53], i have 53, src has [1,53]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 16769024 heap: 83886080 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 53 ms_handle_reset con 0x5579ff11e400 session 0x5579fef89dc0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11f800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 53 ms_handle_reset con 0x5579ff11f800 session 0x5579ff30bdc0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11f400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 53 ms_handle_reset con 0x5579ff11f400 session 0x5579ff30a8c0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:45.671540+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 15106048 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 53 heartbeat osd_stat(store_statfs(0x4fcc59000/0x0/0x4ffc00000, data 0x151813e/0x1571000, compress 0x0/0x0/0x0, omap 0x6ec1, meta 0x1a2913f), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050bc00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:46.671664+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 23248896 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 54 ms_handle_reset con 0x557a0050bc00 session 0x5579ff2f3a40
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:47.671805+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 22011904 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914755 data_alloc: 218103808 data_used: 19
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:48.671961+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 22151168 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 55 ms_handle_reset con 0x5579ff11e800 session 0x5579fd223dc0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 55 ms_handle_reset con 0x5579ff11e400 session 0x5579fcc7a8c0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 55 heartbeat osd_stat(store_statfs(0x4f9c56000/0x0/0x4ffc00000, data 0x451973f/0x4573000, compress 0x0/0x0/0x0, omap 0x71b7, meta 0x1a28e49), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:49.672106+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 22151168 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11f000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:50.672249+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 55 handle_osd_map epochs [55,56], i have 56, src has [1,56]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 22093824 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 56 ms_handle_reset con 0x5579ff11f000 session 0x5579fd223dc0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:51.672379+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050b400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 19963904 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 57 ms_handle_reset con 0x557a0050b400 session 0x5579fde81340
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:52.672562+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 19947520 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679287 data_alloc: 218103808 data_used: 8138
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:53.672732+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050b000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.603509903s of 11.047701836s, submitted: 162
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 19890176 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 58 heartbeat osd_stat(store_statfs(0x4fcc51000/0x0/0x4ffc00000, data 0x151d579/0x1579000, compress 0x0/0x0/0x0, omap 0x7e47, meta 0x1a281b9), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:54.672885+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 59 ms_handle_reset con 0x557a0050b000 session 0x5579fcc7aa80
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 19881984 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050b000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:55.673027+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 60 ms_handle_reset con 0x557a0050b000 session 0x5579fce75340
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 18653184 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:56.673219+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 61 ms_handle_reset con 0x5579ff11e400 session 0x5579fef59340
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 18604032 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:57.673377+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 18579456 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 701860 data_alloc: 218103808 data_used: 8138
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fcc3b000/0x0/0x4ffc00000, data 0x1524597/0x158b000, compress 0x0/0x0/0x0, omap 0x9675, meta 0x1a2698b), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050ac00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:58.673545+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 18628608 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:59.673712+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 63 ms_handle_reset con 0x557a0050ac00 session 0x5579fcc7bdc0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fcc40000/0x0/0x4ffc00000, data 0x15245ba/0x158c000, compress 0x0/0x0/0x0, omap 0x9675, meta 0x1a2698b), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 18595840 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050a800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:00.673862+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 64 ms_handle_reset con 0x557a0050a800 session 0x5579fce74fc0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 18595840 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cd000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:01.674004+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fcc35000/0x0/0x4ffc00000, data 0x15271af/0x1593000, compress 0x0/0x0/0x0, omap 0x9b7c, meta 0x1a26484), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 65 ms_handle_reset con 0x5579fd8cd000 session 0x5579fde80e00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 18587648 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cd000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 65 ms_handle_reset con 0x5579fd8cd000 session 0x5579fce75880
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050a800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 65 heartbeat osd_stat(store_statfs(0x4fcc35000/0x0/0x4ffc00000, data 0x15271af/0x1593000, compress 0x0/0x0/0x0, omap 0x9b7c, meta 0x1a26484), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 66 ms_handle_reset con 0x5579ff282c00 session 0x5579fce75180
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:02.674170+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff283000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 18382848 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 723773 data_alloc: 218103808 data_used: 8154
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:03.674361+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.428085327s of 10.007885933s, submitted: 151
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 67 ms_handle_reset con 0x5579ff283000 session 0x5579fd0b7180
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 67 ms_handle_reset con 0x557a0050a800 session 0x5579fcc7a8c0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 67 ms_handle_reset con 0x5579ff11e400 session 0x5579fd0c2700
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 17211392 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 67 ms_handle_reset con 0x5579ff11e400 session 0x5579ff117880
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff283000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:04.674482+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 68 ms_handle_reset con 0x5579ff283000 session 0x5579fc4c7340
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17309696 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fcc29000/0x0/0x4ffc00000, data 0x152cc08/0x15a3000, compress 0x0/0x0/0x0, omap 0xab60, meta 0x1a254a0), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:05.674617+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17309696 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:06.674776+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17309696 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:07.674972+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17309696 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 740385 data_alloc: 218103808 data_used: 8154
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:08.675207+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17268736 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 70 ms_handle_reset con 0x5579ff282c00 session 0x5579fc4c6a80
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:09.675338+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 70 heartbeat osd_stat(store_statfs(0x4fcc1c000/0x0/0x4ffc00000, data 0x1530c9f/0x15ab000, compress 0x0/0x0/0x0, omap 0xb303, meta 0x1a24cfd), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17227776 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:10.675479+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 71 ms_handle_reset con 0x5579fd8cc400 session 0x5579fef88700
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 17178624 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:11.675861+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 17170432 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 72 heartbeat osd_stat(store_statfs(0x4fba7f000/0x0/0x4ffc00000, data 0x1531850/0x15aa000, compress 0x0/0x0/0x0, omap 0xb613, meta 0x2bc49ed), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 72 ms_handle_reset con 0x5579fd8cc800 session 0x5579ff06cfc0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:12.676210+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 17104896 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744608 data_alloc: 218103808 data_used: 12215
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:13.676407+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 17096704 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.926726341s of 10.408112526s, submitted: 123
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 73 ms_handle_reset con 0x5579fd8cc800 session 0x5579ff06d340
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:14.676669+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 17088512 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 74 ms_handle_reset con 0x5579fd8cc400 session 0x5579feb95880
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 74 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x1534fb9/0x15b2000, compress 0x0/0x0/0x0, omap 0xbd3f, meta 0x2bc42c1), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:15.677022+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 17063936 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 75 ms_handle_reset con 0x5579ff11e400 session 0x5579fd223500
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:16.677187+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 17022976 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 76 ms_handle_reset con 0x5579ff282c00 session 0x5579ff116000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:17.677383+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff283000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cd000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fba70000/0x0/0x4ffc00000, data 0x1537d49/0x15b7000, compress 0x0/0x0/0x0, omap 0xc27b, meta 0x2bc3d85), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 16556032 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 764911 data_alloc: 218103808 data_used: 16788
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fba47000/0x0/0x4ffc00000, data 0x155d231/0x15de000, compress 0x0/0x0/0x0, omap 0xc4f1, meta 0x2bc3b0f), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:18.677632+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 16556032 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:19.677860+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 16539648 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050ac00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:20.678036+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 16367616 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 78 heartbeat osd_stat(store_statfs(0x4fba49000/0x0/0x4ffc00000, data 0x155e836/0x15e1000, compress 0x0/0x0/0x0, omap 0xc835, meta 0x2bc37cb), peers [0,2] op hist [0,1])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 78 ms_handle_reset con 0x557a0050ac00 session 0x5579feb95a40
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:21.678240+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 16318464 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 79 ms_handle_reset con 0x5579fd8cc400 session 0x5579fce74380
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:22.678544+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 16302080 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771120 data_alloc: 218103808 data_used: 26993
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:23.678777+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 80 ms_handle_reset con 0x5579fd8cc800 session 0x5579fce74c40
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 80 ms_handle_reset con 0x5579ff11e400 session 0x5579fd0c28c0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.784863472s of 10.014824867s, submitted: 147
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 16318464 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 81 ms_handle_reset con 0x5579ff282c00 session 0x5579fcc7bdc0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:24.678989+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 16293888 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050b000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:25.679183+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 82 ms_handle_reset con 0x557a0050b000 session 0x5579fd2236c0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 16228352 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fba3e000/0x0/0x4ffc00000, data 0x1563f71/0x15ec000, compress 0x0/0x0/0x0, omap 0xd794, meta 0x2bc286c), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 82 handle_osd_map epochs [83,83], i have 83, src has [1,83]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:26.679346+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579fd8cc400 session 0x5579ff30bdc0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 16228352 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:27.679621+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 16228352 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 781447 data_alloc: 218103808 data_used: 27012
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:28.679892+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 16228352 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:29.680178+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 16228352 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread fragmentation_score=0.000119 took=0.000017s
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:30.680397+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579fd8cc800 session 0x5579ff30ac40
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579ff282c00 session 0x5579fef88540
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579ff11e400 session 0x5579ff30b340
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050b400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x557a0050b400 session 0x5579fef888c0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579fd8cc400 session 0x5579fef58fc0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579fd8cc800 session 0x5579fef59a40
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 83 ms_handle_reset con 0x5579ff11e400 session 0x5579fef581c0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 16080896 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:31.680561+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 83 heartbeat osd_stat(store_statfs(0x4fba3c000/0x0/0x4ffc00000, data 0x15655b8/0x15f0000, compress 0x0/0x0/0x0, omap 0xda19, meta 0x2bc25e7), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 16080896 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:32.680702+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 84 ms_handle_reset con 0x5579ff282c00 session 0x5579ff2da540
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 15007744 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 787085 data_alloc: 218103808 data_used: 27012
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:33.680884+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050b800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 15007744 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:34.681040+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 84 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1566ac3/0x15f4000, compress 0x0/0x0/0x0, omap 0xdca2, meta 0x2bc235e), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 15007744 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:35.681212+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 15007744 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:36.681372+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 15007744 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050bc00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 84 ms_handle_reset con 0x557a0050bc00 session 0x5579ff2f3a40
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.167292595s of 13.318862915s, submitted: 74
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:37.681525+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 14999552 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 787285 data_alloc: 218103808 data_used: 27078
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 84 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1566ac3/0x15f4000, compress 0x0/0x0/0x0, omap 0xdca2, meta 0x2bc235e), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 85 ms_handle_reset con 0x5579fd8cc400 session 0x5579ff2dafc0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 85 ms_handle_reset con 0x5579fd8cc800 session 0x5579ff2da1c0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 85 ms_handle_reset con 0x5579ff11e400 session 0x5579fef58000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:38.681688+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 85 ms_handle_reset con 0x5579ff282c00 session 0x5579ff30a1c0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050a800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1566ac3/0x15f4000, compress 0x0/0x0/0x0, omap 0xdca2, meta 0x2bc235e), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 14794752 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 86 ms_handle_reset con 0x557a0050a800 session 0x5579fef58380
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:39.681913+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 14786560 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 87 ms_handle_reset con 0x5579fd8cc400 session 0x5579fd0c2a80
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:40.682082+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 14696448 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cc800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 87 ms_handle_reset con 0x5579fd8cc800 session 0x5579fd0b76c0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 87 ms_handle_reset con 0x5579ff11e400 session 0x5579fef88e00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 87 ms_handle_reset con 0x5579ff282c00 session 0x5579fde81c00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:41.682280+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 14671872 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 87 heartbeat osd_stat(store_statfs(0x4fba2a000/0x0/0x4ffc00000, data 0x156ad2e/0x1600000, compress 0x0/0x0/0x0, omap 0xe81b, meta 0x2bc17e5), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd903800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:42.682403+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 87 heartbeat osd_stat(store_statfs(0x4fba2a000/0x0/0x4ffc00000, data 0x156ad2e/0x1600000, compress 0x0/0x0/0x0, omap 0xe81b, meta 0x2bc17e5), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 87 ms_handle_reset con 0x5579fd903800 session 0x5579fd0b6c40
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd903800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 14671872 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 801919 data_alloc: 218103808 data_used: 31803
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:43.682919+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fba2a000/0x0/0x4ffc00000, data 0x156acec/0x15ff000, compress 0x0/0x0/0x0, omap 0xea3c, meta 0x2bc15c4), peers [0,2] op hist [2])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 88 ms_handle_reset con 0x5579fd903800 session 0x5579fde80e00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 14614528 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:44.683592+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 14614528 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 88 ms_handle_reset con 0x557a0050b800 session 0x5579ff06da40
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:45.683728+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 89 ms_handle_reset con 0x5579ff11e400 session 0x5579fc4c6e00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 14598144 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:46.683954+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 14598144 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:47.684205+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 89 heartbeat osd_stat(store_statfs(0x4fba25000/0x0/0x4ffc00000, data 0x156d8af/0x1601000, compress 0x0/0x0/0x0, omap 0xf49f, meta 0x2bc0b61), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 89 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.245538712s of 10.530361176s, submitted: 174
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 14598144 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807114 data_alloc: 218103808 data_used: 35716
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:48.684392+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 90 ms_handle_reset con 0x5579ff283000 session 0x5579fef89180
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 90 ms_handle_reset con 0x5579fd8cd000 session 0x5579fd0c2c40
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd8cd000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 90 ms_handle_reset con 0x5579fd8cd000 session 0x5579ff346700
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 14704640 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:49.684725+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 90 heartbeat osd_stat(store_statfs(0x4fba4a000/0x0/0x4ffc00000, data 0x154adb3/0x15e0000, compress 0x0/0x0/0x0, omap 0xf897, meta 0x2bc0769), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 90 heartbeat osd_stat(store_statfs(0x4fba4a000/0x0/0x4ffc00000, data 0x154ad90/0x15df000, compress 0x0/0x0/0x0, omap 0xf89b, meta 0x2bc0765), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 14704640 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:50.685067+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 90 ms_handle_reset con 0x5579ff11e400 session 0x5579ff117880
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff283000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 13631488 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:51.685236+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 90 handle_osd_map epochs [90,91], i have 91, src has [1,91]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 91 ms_handle_reset con 0x5579ff283000 session 0x5579ff3476c0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 13615104 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fba48000/0x0/0x4ffc00000, data 0x154c3c1/0x15e2000, compress 0x0/0x0/0x0, omap 0xff15, meta 0x2bc00eb), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:52.685395+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 13615104 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 806746 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:53.685571+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:54.685878+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:55.686212+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:56.686363+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:57.686560+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.008848190s of 10.286990166s, submitted: 92
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba45000/0x0/0x4ffc00000, data 0x154d88d/0x15e5000, compress 0x0/0x0/0x0, omap 0x101b5, meta 0x2bbfe4b), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:58.686750+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:59.686932+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:00.687207+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 13606912 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:01.687416+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:02.687666+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:03.687922+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:04.688195+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:05.688399+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:06.688566+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:07.688707+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15094 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:41 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 29 09:44:41 compute-0 ceph-3fdce3ca-565d-5459-88e8-1ffe58b48437-mgr-compute-0-ucpkkb[75469]: 2026-01-29T09:44:41.463+0000 7f5f5ebc1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:08.688827+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:09.688966+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:10.689092+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:11.689233+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:12.689358+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:13.689519+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:14.689764+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:15.689907+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:16.690042+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:17.690207+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:18.690334+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:19.690498+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:20.690682+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:21.690831+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:22.691011+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:23.691191+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:24.691344+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:25.691508+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:26.691629+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:27.691781+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:28.691902+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:29.692047+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:30.692219+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:31.692324+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:32.692468+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:33.692866+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:34.693026+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:35.693163+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:36.693307+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:37.693405+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:38.693532+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:39.693736+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:40.693952+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:41.694083+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:42.694268+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:43.694467+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:44.694622+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:45.694847+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:46.694995+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:47.695188+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:48.695337+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:49.695480+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:50.695626+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:51.695773+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:52.695925+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:53.696089+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:54.696250+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:55.696487+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:56.696643+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:57.696815+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:58.696946+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:59.697076+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:00.697229+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:01.697374+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:02.697470+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:03.697627+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:04.697756+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:05.697902+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:06.698027+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:07.698191+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:08.698306+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:09.698407+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:10.698536+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:11.698665+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 13598720 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:12.698785+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 13475840 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'config diff' '{prefix=config diff}'
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'config show' '{prefix=config show}'
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'counter dump' '{prefix=counter dump}'
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'counter schema' '{prefix=counter schema}'
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:13.698930+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79200256 unmapped: 13082624 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:14.699050+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 12869632 heap: 92282880 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'log dump' '{prefix=log dump}'
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:15.699213+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 23912448 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'perf dump' '{prefix=perf dump}'
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'perf schema' '{prefix=perf schema}'
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:16.699319+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 23683072 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:17.699483+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 23683072 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:18.699606+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 23683072 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:19.699720+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 23683072 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:20.699888+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 23683072 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:21.700007+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 23683072 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:22.700151+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 23683072 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:23.700324+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 23683072 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:24.700483+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 23683072 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:25.700667+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 23683072 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:26.700837+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 23674880 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:27.700956+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 23674880 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:28.701071+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 23674880 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:29.701219+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 23674880 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:30.701405+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 23674880 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:31.701586+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:32.701745+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:33.701930+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:34.702074+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:35.702269+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:36.702459+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:37.702602+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:38.702814+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:39.702983+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:40.703154+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:41.703358+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:42.703565+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:43.703776+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:44.703938+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:45.704092+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:46.704232+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:47.704406+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:48.704561+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 23666688 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:49.704766+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:50.704966+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:51.705179+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:52.705366+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:53.705622+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:54.705809+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:55.705971+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:56.706160+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:57.706313+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:58.706463+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:59.706616+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:00.706761+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:01.706912+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:02.707061+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:03.707261+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:04.707401+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:05.707558+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:06.707698+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:07.707849+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:08.708065+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:09.708219+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 23658496 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:10.708377+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 23650304 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:11.708558+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 23650304 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:12.708701+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 23650304 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:13.708884+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 23650304 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:14.709016+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 23650304 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:15.709256+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 23650304 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:16.709393+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 23650304 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:17.709530+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 23650304 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:18.709664+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 23650304 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:19.709815+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:20.709962+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:21.710189+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:22.710337+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:23.710571+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:24.710708+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:25.710911+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:26.711053+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:27.711222+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:28.711370+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:29.711493+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:30.711633+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:31.711742+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:32.711889+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:33.712051+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:34.712192+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:35.712340+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:36.712448+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:37.712659+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:38.712856+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:39.713052+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:40.713209+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:41.713343+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:42.713487+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:43.713715+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:44.713809+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:45.713984+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:46.714149+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:47.714299+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:48.714446+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:49.714578+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:50.714727+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:51.714867+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:52.715006+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:53.715268+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:54.715429+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:55.715588+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:56.715743+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:57.715955+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:58.716192+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:59.716415+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:00.716590+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:01.716826+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:02.717068+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:03.717342+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:04.717593+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:05.717731+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:06.717894+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:07.718052+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:08.718236+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:09.718396+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:10.718534+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:11.718692+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:12.718880+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:13.719081+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:14.719252+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:15.719393+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:16.719624+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:17.719790+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:18.719995+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:19.720155+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:20.720326+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:21.720471+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:22.720698+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:23.720874+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:24.721040+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:25.721209+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:26.721400+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:27.721530+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:28.721662+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:29.721821+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:30.722050+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:31.722227+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:32.722367+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:33.722732+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:34.722867+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:35.723194+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:36.723349+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:37.723525+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:38.723705+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:39.723861+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:40.724012+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:41.724186+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:42.724421+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:43.724645+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:44.724786+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:45.724964+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:46.725192+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:47.725377+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:48.725585+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:49.725711+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:50.725847+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:51.725994+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:52.726197+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:53.726401+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:54.726594+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:55.726767+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:56.726905+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:57.727028+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:58.727224+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:59.727448+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:00.727679+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:01.727861+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:02.728191+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:03.728493+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:04.728653+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:05.728879+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:06.729078+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:07.729243+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:08.729437+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:09.729630+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:10.729836+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:11.730013+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:12.730256+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:13.730588+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23609344 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:14.730806+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23609344 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:15.731046+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23609344 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:16.731396+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23609344 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:17.731599+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23609344 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:18.731806+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23609344 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:19.731988+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23609344 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:20.732191+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23601152 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:21.732373+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23601152 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:22.732570+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23601152 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:23.732786+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23601152 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:24.732973+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23601152 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:25.733180+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23601152 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:26.733435+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23601152 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:27.733632+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23601152 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:28.733756+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:29.733897+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:30.734033+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:31.734183+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:32.734349+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:33.734535+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:34.734879+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:35.735006+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:36.735185+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:37.735339+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:38.735512+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:39.735653+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:40.735777+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:41.735967+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:42.736207+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:43.736390+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23592960 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:44.736524+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:45.736679+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:46.736885+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:47.737205+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:48.737527+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:49.737658+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:50.737833+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:51.738002+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:52.738170+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:53.738350+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:54.738509+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:55.738697+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:56.738857+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:57.738977+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:58.739190+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:59.739329+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:00.739471+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:01.739659+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:02.739786+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:03.739997+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:04.740207+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:05.740339+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:06.740497+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:07.740673+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23584768 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:08.740837+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:09.740953+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:10.741106+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:11.741249+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:12.741408+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:13.741590+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:14.741745+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:15.741888+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:16.742023+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:17.742194+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:18.742336+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:19.742508+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:20.742656+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:21.742800+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:22.742953+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:23.743208+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:24.743406+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 23568384 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:25.743564+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 23560192 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:26.743773+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 23552000 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:27.743975+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 23552000 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:28.744235+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 23552000 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:29.744364+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:30.744560+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:31.744696+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:32.744871+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:33.745191+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:34.745369+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:35.745528+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:36.745670+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:37.745870+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:38.746041+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:39.746214+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:40.746693+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:41.747038+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:42.747348+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:43.747652+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:44.747969+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:45.748225+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:46.748394+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:47.748538+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:48.748722+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 23535616 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:49.748958+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:50.749189+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:51.749348+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:52.749800+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:53.750790+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:54.750961+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:55.751201+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:56.751398+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:57.751608+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:58.751844+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:59.752074+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:00.752284+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:01.752477+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:02.752671+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:03.752863+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:04.753120+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:05.753258+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:06.753373+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:07.753499+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:08.753658+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 23519232 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:09.753765+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 23502848 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:10.753957+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 23502848 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:11.754169+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 23502848 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:12.754322+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 23502848 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:13.754530+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 23502848 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:14.754691+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 23502848 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:15.754832+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 23502848 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:16.754962+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 23502848 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:17.755065+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 23502848 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:18.755205+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 23502848 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:19.755685+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 23494656 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:20.755813+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 23494656 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:21.755941+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 23494656 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:22.756083+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 23494656 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:23.756293+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 23494656 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:24.756446+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 23494656 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:25.756632+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 23494656 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:26.756797+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 23494656 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:27.757002+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 23494656 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:28.757182+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 23486464 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:29.757314+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:30.757468+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:31.757694+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:32.757887+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:33.758118+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:34.758299+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:35.758486+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:36.758655+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:37.758832+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:38.759045+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:39.759200+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:40.759347+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:41.759553+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:42.759724+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:43.759913+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:44.760046+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:45.760217+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:46.760403+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:47.760584+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:48.761982+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:49.762217+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:50.762386+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:51.762895+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:52.763112+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:53.763304+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:54.763451+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:55.763599+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:56.763793+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:57.763967+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:58.764167+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:59.764303+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:00.764518+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:01.764690+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:02.764849+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:03.765069+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:04.765215+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:05.765386+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:06.765548+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:07.765742+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:08.765881+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:09.766036+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:10.766226+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:11.766408+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:12.766548+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:13.766781+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:14.766931+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:15.767077+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:16.767263+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23625728 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:17.767432+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:18.767638+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:19.767817+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:20.768001+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:21.768256+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:22.768479+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:23.768674+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:24.768843+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:25.769033+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:26.769196+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:27.769359+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:28.769497+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:29.769646+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:30.769829+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:31.770037+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:32.770314+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:33.770564+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:34.770735+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:35.770909+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:36.771081+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:37.771223+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:38.771375+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:39.771557+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:40.771742+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:41.771924+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:42.772078+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:43.772264+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:44.772420+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:45.772642+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:46.772810+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:47.773050+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:48.773234+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:49.773411+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:50.773619+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:51.773833+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:52.774076+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:53.774383+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:54.774569+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:55.774828+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:56.775348+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:57.775572+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:58.775813+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:59.776095+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:00.776274+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:01.776446+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:02.776667+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 23642112 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:03.776933+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23633920 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:04.777205+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:05.777413+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:06.777638+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:07.777848+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:08.778033+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:09.778236+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:10.778396+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:11.778584+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:12.778818+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:13.779027+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:14.779223+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:15.779418+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:16.779649+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:17.779839+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:18.780028+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:19.780235+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:20.780422+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:21.780674+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:22.780911+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:23.781245+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:24.781527+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:25.781716+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:26.781948+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:27.782264+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.9 total, 600.0 interval
                                           Cumulative writes: 6065 writes, 24K keys, 6065 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6065 writes, 1167 syncs, 5.20 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1663 writes, 4565 keys, 1663 commit groups, 1.0 writes per commit group, ingest: 2.52 MB, 0.00 MB/s
                                           Interval WAL: 1663 writes, 727 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:28.782489+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:29.782825+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:30.783045+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:31.783302+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:32.783597+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:33.783822+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:34.784044+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:35.784279+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:36.784601+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:37.784906+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:38.785207+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:39.785417+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:40.785640+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:41.785895+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:42.786249+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23617536 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:43.786574+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 ms_handle_reset con 0x5579fc7bfc00 session 0x5579fb5ec000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x557a0050b800
Jan 29 09:44:41 compute-0 ceph-osd[87035]: mgrc ms_handle_reset ms_handle_reset con 0x5579fd00c400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1795618739
Jan 29 09:44:41 compute-0 ceph-osd[87035]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1795618739,v1:192.168.122.100:6801/1795618739]
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: get_auth_request con 0x557a0050a800 auth_method 0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: mgrc handle_mgr_configure stats_period=5
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:44.786833+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 ms_handle_reset con 0x5579fc7b2400 session 0x5579fd0b6a80
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff282c00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 ms_handle_reset con 0x5579fd902400 session 0x5579ff30a540
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579ff11e000
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:45.787052+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:46.787310+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:47.787641+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:48.787916+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:49.788322+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:50.788587+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:51.788845+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:52.789098+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:53.789390+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:54.789642+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:55.789902+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:56.790207+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:57.790418+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:58.790624+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:59.790810+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:00.791013+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:01.791244+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:02.791796+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:03.792366+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:04.792684+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:05.792952+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:06.793258+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:07.793543+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:08.793772+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:09.794034+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:10.794276+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:11.794495+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:12.794730+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:13.794973+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:14.795201+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:15.795443+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:16.795676+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:17.795959+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:18.796213+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:19.796512+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:20.796726+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:21.796916+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:22.797077+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:23.797255+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:24.797411+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:25.797596+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:26.797895+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:27.798058+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:28.798283+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:29.798493+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:30.798806+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:31.798983+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:32.799183+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:33.799409+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:34.799620+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:35.799786+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 ms_handle_reset con 0x5579fddc9400 session 0x5579fd0c2e00
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x5579fd902400
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:36.800015+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:37.800208+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:38.800388+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:39.800596+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:40.800780+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:41.800941+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:42.801101+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:43.801406+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:44.801544+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:45.801693+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:46.801858+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:47.802012+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:48.802204+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:49.802412+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:50.802596+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:51.802760+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:52.802943+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:53.803089+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:54.803300+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:55.803469+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:56.803609+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:57.803758+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:58.803907+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 23355392 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:59.804057+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 23347200 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:00.804197+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 23347200 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:01.804347+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 23347200 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:02.804517+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 23347200 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:03.804707+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 23347200 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:04.804855+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 23347200 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:05.804963+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 23347200 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x154ed3d/0x15e8000, compress 0x0/0x0/0x0, omap 0x103a6, meta 0x2bbfc5a), peers [0,2] op hist [])
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:06.805113+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 23347200 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:07.805311+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 23347200 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:08.805463+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 23347200 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'config diff' '{prefix=config diff}'
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'config show' '{prefix=config show}'
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'counter dump' '{prefix=counter dump}'
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'counter schema' '{prefix=counter schema}'
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:09.805611+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 23339008 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:41 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:41 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812146 data_alloc: 218103808 data_used: 33598
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: tick
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 29 09:44:41 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:10.805734+0000)
Jan 29 09:44:41 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 23527424 heap: 103325696 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:41 compute-0 ceph-osd[87035]: do_command 'log dump' '{prefix=log dump}'
Jan 29 09:44:41 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2156848985' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 29 09:44:41 compute-0 ceph-mon[75183]: from='client.15090 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:41 compute-0 ceph-mon[75183]: pgmap v1066: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:41 compute-0 ceph-mon[75183]: from='client.15094 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 29 09:44:42 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2555206318' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 29 09:44:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 29 09:44:42 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1425047638' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 29 09:44:42 compute-0 rsyslogd[998]: imjournal from <np0005600302:ceph-osd>: begin to drop messages due to rate-limiting
Jan 29 09:44:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 29 09:44:42 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2937498822' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 29 09:44:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 29 09:44:42 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392457456' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 29 09:44:42 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 29 09:44:42 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1133764375' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 29 09:44:42 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2555206318' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 29 09:44:42 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1425047638' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 29 09:44:42 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2937498822' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 29 09:44:42 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2392457456' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 29 09:44:42 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1133764375' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 29 09:44:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1067: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 29 09:44:43 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2327143508' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 29 09:44:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 29 09:44:43 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4240869418' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 29 09:44:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 29 09:44:43 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/795954900' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 29 09:44:43 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 29 09:44:43 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3797637777' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 29 09:44:43 compute-0 ceph-mon[75183]: pgmap v1067: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:43 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2327143508' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 29 09:44:43 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/4240869418' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 29 09:44:43 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/795954900' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 29 09:44:43 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3797637777' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 29 09:44:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 29 09:44:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033978212' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 29 09:44:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:44:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 29 09:44:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/387457689' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 29 09:44:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 29 09:44:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3362428886' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 29 09:44:44 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 29 09:44:44 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3767939339' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 29 09:44:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 29 09:44:45 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2536778254' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 29 09:44:45 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3033978212' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 29 09:44:45 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/387457689' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 29 09:44:45 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3362428886' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 29 09:44:45 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3767939339' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 29 09:44:45 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/2536778254' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 29 09:44:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1068: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 29 09:44:45 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1682481664' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 29 09:44:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15126 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:45 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 29 09:44:45 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1052824979' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1b( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.124768 1 0.000039
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1b( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.124892 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.1b( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.146810 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.129868 1 0.000039
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.129988 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.155310 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.d( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.137554 1 0.000064
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.d( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.137739 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.d( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.162904 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.d( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.144607 1 0.000096
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.d( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.144955 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.d( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.170505 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1e( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.151753 1 0.000033
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1e( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.152234 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1e( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.175759 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.4( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.159058 1 0.000034
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.4( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.159591 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.4( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.184096 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.166631 1 0.000049
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.167143 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.190986 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.7( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.174063 1 0.000031
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.7( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.175230 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.7( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.199053 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.181392 1 0.000041
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.181999 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.206717 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.188636 1 0.000043
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.189220 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.214449 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.5( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.195886 1 0.000028
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.5( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.196519 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.5( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.220727 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.f( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.203401 1 0.000038
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.f( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.204057 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.f( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.229959 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.b( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.210663 1 0.000047
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.b( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.211347 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.b( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.235917 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.2( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.218056 1 0.000042
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.2( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.218794 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.2( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.243240 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.225491 1 0.000032
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.226520 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.251025 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.9( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.232647 1 0.000024
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.9( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.233464 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.9( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.258130 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.17( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.231182 1 0.000069
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.17( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.231468 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.17( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.267657 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.238324 1 0.000054
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.238605 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.271996 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1d( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.245167 1 0.000042
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1d( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.245979 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.1d( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.279261 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.10( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.253162 1 0.000574
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.10( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.253513 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.10( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.289376 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.259078 1 0.000058
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.260786 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[6.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=45) [1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.295557 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.8( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.266567 1 0.000062
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.8( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.268294 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.8( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.301838 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.14( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.273814 1 0.000025
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.14( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.275625 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.14( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.311961 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.12( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 DELETING pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.281210 1 0.000035
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.12( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.283145 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 pg_epoch: 46 pg[4.12( empty lb MIN local-lis/les=41/43 n=0 ec=41/23 lis/c=41/41 les/c/f=43/43/0 sis=45) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.317887 0 0.000000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 1056768 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe154000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:38.667286+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 1040384 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:39.667429+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 1032192 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe154000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:40.667669+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.277514458s of 12.600845337s, submitted: 606
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 1024000 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:41.667805+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 5 sent 3 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:10.726313+0000 osd.0 (osd.0) 4 : cluster [DBG] 4.17 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:10.736734+0000 osd.0 (osd.0) 5 : cluster [DBG] 4.17 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 5)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:10.726313+0000 osd.0 (osd.0) 4 : cluster [DBG] 4.17 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:10.736734+0000 osd.0 (osd.0) 5 : cluster [DBG] 4.17 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 1024000 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 344839 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:42.668014+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 1024000 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:43.668126+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 1015808 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:44.668460+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 7 sent 5 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:13.698477+0000 osd.0 (osd.0) 6 : cluster [DBG] 6.1a scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:13.709036+0000 osd.0 (osd.0) 7 : cluster [DBG] 6.1a scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 983040 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 7)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:13.698477+0000 osd.0 (osd.0) 6 : cluster [DBG] 6.1a scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:13.709036+0000 osd.0 (osd.0) 7 : cluster [DBG] 6.1a scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:45.668752+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 983040 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:46.668888+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 9 sent 7 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:16.642252+0000 osd.0 (osd.0) 8 : cluster [DBG] 4.16 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:16.652077+0000 osd.0 (osd.0) 9 : cluster [DBG] 4.16 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 983040 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 349665 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 9)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:16.642252+0000 osd.0 (osd.0) 8 : cluster [DBG] 4.16 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:16.652077+0000 osd.0 (osd.0) 9 : cluster [DBG] 4.16 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:47.669227+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 983040 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:48.669399+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 974848 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:49.669570+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:19.590183+0000 osd.0 (osd.0) 10 : cluster [DBG] 4.15 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:19.600633+0000 osd.0 (osd.0) 11 : cluster [DBG] 4.15 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 901120 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 11)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:19.590183+0000 osd.0 (osd.0) 10 : cluster [DBG] 4.15 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:19.600633+0000 osd.0 (osd.0) 11 : cluster [DBG] 4.15 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:50.669852+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 901120 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.550309181s of 10.874446869s, submitted: 8
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:51.670108+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:21.600690+0000 osd.0 (osd.0) 12 : cluster [DBG] 6.16 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:21.611242+0000 osd.0 (osd.0) 13 : cluster [DBG] 6.16 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 892928 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 354491 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:52.670646+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 4 last_log 15 sent 13 num 4 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:22.639979+0000 osd.0 (osd.0) 14 : cluster [DBG] 6.10 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:22.664707+0000 osd.0 (osd.0) 15 : cluster [DBG] 6.10 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 13)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:21.600690+0000 osd.0 (osd.0) 12 : cluster [DBG] 6.16 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:21.611242+0000 osd.0 (osd.0) 13 : cluster [DBG] 6.16 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 15)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:22.639979+0000 osd.0 (osd.0) 14 : cluster [DBG] 6.10 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:22.664707+0000 osd.0 (osd.0) 15 : cluster [DBG] 6.10 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 868352 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:53.671673+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 876544 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:54.671855+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:24.614775+0000 osd.0 (osd.0) 16 : cluster [DBG] 6.12 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:24.625397+0000 osd.0 (osd.0) 17 : cluster [DBG] 6.12 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 17)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:24.614775+0000 osd.0 (osd.0) 16 : cluster [DBG] 6.12 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:24.625397+0000 osd.0 (osd.0) 17 : cluster [DBG] 6.12 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 876544 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:55.672070+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 876544 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:56.672258+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 876544 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 359317 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:57.672821+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 868352 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:58.673089+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 868352 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:13:59.673294+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 925696 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:00.673659+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:30.643210+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.c scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:30.653637+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.c scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 909312 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 19)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:30.643210+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.c scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:30.653637+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.c scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:01.673860+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 901120 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 361728 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:02.674074+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.969016075s of 11.074629784s, submitted: 8
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 876544 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:03.674263+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:32.675394+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.0 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:32.685980+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.0 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 876544 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 21)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:32.675394+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.0 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:32.685980+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.0 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:04.674563+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 860160 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:05.674791+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61972480 unmapped: 851968 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:06.674973+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61972480 unmapped: 851968 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 364139 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:07.675170+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:37.651377+0000 osd.0 (osd.0) 22 : cluster [DBG] 6.0 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:37.661822+0000 osd.0 (osd.0) 23 : cluster [DBG] 6.0 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 23)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:37.651377+0000 osd.0 (osd.0) 22 : cluster [DBG] 6.0 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:37.661822+0000 osd.0 (osd.0) 23 : cluster [DBG] 6.0 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 827392 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:08.675469+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 827392 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:09.675696+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62029824 unmapped: 794624 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:10.675966+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 786432 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:11.676115+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 786432 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 366550 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:12.676401+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.968639374s of 10.029688835s, submitted: 4
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 778240 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:13.676579+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:42.705049+0000 osd.0 (osd.0) 24 : cluster [DBG] 6.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:42.715683+0000 osd.0 (osd.0) 25 : cluster [DBG] 6.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 25)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:42.705049+0000 osd.0 (osd.0) 24 : cluster [DBG] 6.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:42.715683+0000 osd.0 (osd.0) 25 : cluster [DBG] 6.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 778240 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:14.676819+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:43.684902+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:43.695405+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 27)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:43.684902+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:43.695405+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 778240 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:15.677176+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62054400 unmapped: 770048 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:16.677304+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62054400 unmapped: 770048 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 371372 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:17.677484+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:47.608243+0000 osd.0 (osd.0) 28 : cluster [DBG] 6.1b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:47.618892+0000 osd.0 (osd.0) 29 : cluster [DBG] 6.1b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 753664 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 29)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:47.608243+0000 osd.0 (osd.0) 28 : cluster [DBG] 6.1b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:47.618892+0000 osd.0 (osd.0) 29 : cluster [DBG] 6.1b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:18.677725+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:48.652664+0000 osd.0 (osd.0) 30 : cluster [DBG] 4.19 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:48.663384+0000 osd.0 (osd.0) 31 : cluster [DBG] 4.19 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 720896 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:19.677940+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 31)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:48.652664+0000 osd.0 (osd.0) 30 : cluster [DBG] 4.19 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:48.663384+0000 osd.0 (osd.0) 31 : cluster [DBG] 4.19 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 720896 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:20.678117+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 712704 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:21.678410+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:51.621335+0000 osd.0 (osd.0) 32 : cluster [DBG] 6.18 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:51.631704+0000 osd.0 (osd.0) 33 : cluster [DBG] 6.18 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 33)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:51.621335+0000 osd.0 (osd.0) 32 : cluster [DBG] 6.18 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:51.631704+0000 osd.0 (osd.0) 33 : cluster [DBG] 6.18 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 378611 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62119936 unmapped: 704512 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:22.678662+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 696320 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.595091820s of 10.912874222s, submitted: 10
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:23.678862+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:53.618064+0000 osd.0 (osd.0) 34 : cluster [DBG] 6.7 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:14:53.632079+0000 osd.0 (osd.0) 35 : cluster [DBG] 6.7 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 35)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:53.618064+0000 osd.0 (osd.0) 34 : cluster [DBG] 6.7 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:14:53.632079+0000 osd.0 (osd.0) 35 : cluster [DBG] 6.7 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 663552 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:24.679242+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 663552 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:25.679415+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 647168 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:26.679647+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 381022 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 638976 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:27.679907+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 630784 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:28.680038+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 630784 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:29.680215+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 630784 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:30.680413+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 622592 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:31.681012+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 381022 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 622592 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:32.681196+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 606208 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:33.681569+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:02.705817+0000 osd.0 (osd.0) 36 : cluster [DBG] 6.19 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:02.716416+0000 osd.0 (osd.0) 37 : cluster [DBG] 6.19 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 37)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:02.705817+0000 osd.0 (osd.0) 36 : cluster [DBG] 6.19 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:02.716416+0000 osd.0 (osd.0) 37 : cluster [DBG] 6.19 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 606208 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:34.682311+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 606208 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:35.682483+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.091015816s of 12.123208046s, submitted: 4
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 581632 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:36.682643+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:05.741338+0000 osd.0 (osd.0) 38 : cluster [DBG] 4.b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:05.751970+0000 osd.0 (osd.0) 39 : cluster [DBG] 4.b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 385846 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 581632 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 39)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:05.741338+0000 osd.0 (osd.0) 38 : cluster [DBG] 4.b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:05.751970+0000 osd.0 (osd.0) 39 : cluster [DBG] 4.b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:37.683112+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62251008 unmapped: 573440 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:38.683264+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:07.782524+0000 osd.0 (osd.0) 40 : cluster [DBG] 6.9 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:07.793034+0000 osd.0 (osd.0) 41 : cluster [DBG] 6.9 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 540672 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:39.683426+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 41)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:07.782524+0000 osd.0 (osd.0) 40 : cluster [DBG] 6.9 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:07.793034+0000 osd.0 (osd.0) 41 : cluster [DBG] 6.9 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 532480 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:40.683572+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:09.747018+0000 osd.0 (osd.0) 42 : cluster [DBG] 6.5 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:09.757549+0000 osd.0 (osd.0) 43 : cluster [DBG] 6.5 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 43)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:09.747018+0000 osd.0 (osd.0) 42 : cluster [DBG] 6.5 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:09.757549+0000 osd.0 (osd.0) 43 : cluster [DBG] 6.5 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 516096 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:41.683795+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 393079 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 507904 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:42.683955+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:11.797076+0000 osd.0 (osd.0) 44 : cluster [DBG] 6.a scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:11.807595+0000 osd.0 (osd.0) 45 : cluster [DBG] 6.a scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 45)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:11.797076+0000 osd.0 (osd.0) 44 : cluster [DBG] 6.a scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:11.807595+0000 osd.0 (osd.0) 45 : cluster [DBG] 6.a scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 499712 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:43.684154+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 499712 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:44.684285+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 1540096 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:45.684409+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:14.835070+0000 osd.0 (osd.0) 46 : cluster [DBG] 4.1d scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:14.845624+0000 osd.0 (osd.0) 47 : cluster [DBG] 4.1d scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 47)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:14.835070+0000 osd.0 (osd.0) 46 : cluster [DBG] 4.1d scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:14.845624+0000 osd.0 (osd.0) 47 : cluster [DBG] 4.1d scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 1531904 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:46.684592+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.695914268s of 11.070383072s, submitted: 10
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 397905 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 1515520 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:47.684822+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:16.811785+0000 osd.0 (osd.0) 48 : cluster [DBG] 4.1e scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:16.822264+0000 osd.0 (osd.0) 49 : cluster [DBG] 4.1e scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 49)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:16.811785+0000 osd.0 (osd.0) 48 : cluster [DBG] 4.1e scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:16.822264+0000 osd.0 (osd.0) 49 : cluster [DBG] 4.1e scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 1507328 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:48.685017+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 1507328 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:49.685168+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:18.821809+0000 osd.0 (osd.0) 50 : cluster [DBG] 4.1f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:18.832367+0000 osd.0 (osd.0) 51 : cluster [DBG] 4.1f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 51)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:18.821809+0000 osd.0 (osd.0) 50 : cluster [DBG] 4.1f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:18.832367+0000 osd.0 (osd.0) 51 : cluster [DBG] 4.1f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 1507328 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:50.685459+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 1499136 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:51.685684+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 400318 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 1499136 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:52.685831+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 1499136 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:53.685981+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:22.753220+0000 osd.0 (osd.0) 52 : cluster [DBG] 4.6 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:22.762738+0000 osd.0 (osd.0) 53 : cluster [DBG] 4.6 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 53)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:22.753220+0000 osd.0 (osd.0) 52 : cluster [DBG] 4.6 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:22.762738+0000 osd.0 (osd.0) 53 : cluster [DBG] 4.6 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 1490944 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:54.686187+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 1490944 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:55.686394+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 1482752 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:56.686666+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 402729 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 1482752 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.841011047s of 10.853298187s, submitted: 6
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:57.686800+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:27.665099+0000 osd.0 (osd.0) 54 : cluster [DBG] 3.12 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:27.675303+0000 osd.0 (osd.0) 55 : cluster [DBG] 3.12 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 55)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:27.665099+0000 osd.0 (osd.0) 54 : cluster [DBG] 3.12 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:27.675303+0000 osd.0 (osd.0) 55 : cluster [DBG] 3.12 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 1466368 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:58.687099+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 1458176 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:14:59.687314+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 1458176 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:00.687534+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 1449984 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:01.687762+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 407555 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 1449984 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:02.687938+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:31.722587+0000 osd.0 (osd.0) 56 : cluster [DBG] 2.11 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:31.732507+0000 osd.0 (osd.0) 57 : cluster [DBG] 2.11 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 1433600 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 57)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:31.722587+0000 osd.0 (osd.0) 56 : cluster [DBG] 2.11 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:31.732507+0000 osd.0 (osd.0) 57 : cluster [DBG] 2.11 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:03.688190+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:32.689571+0000 osd.0 (osd.0) 58 : cluster [DBG] 5.14 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:32.700087+0000 osd.0 (osd.0) 59 : cluster [DBG] 5.14 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 1425408 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 59)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:32.689571+0000 osd.0 (osd.0) 58 : cluster [DBG] 5.14 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:32.700087+0000 osd.0 (osd.0) 59 : cluster [DBG] 5.14 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:04.688424+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 1425408 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:05.688581+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 1417216 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:06.688720+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 412381 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 1417216 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:07.688880+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:36.692314+0000 osd.0 (osd.0) 60 : cluster [DBG] 2.13 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:36.702768+0000 osd.0 (osd.0) 61 : cluster [DBG] 2.13 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.713692665s of 10.046788216s, submitted: 8
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 61)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:36.692314+0000 osd.0 (osd.0) 60 : cluster [DBG] 2.13 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:36.702768+0000 osd.0 (osd.0) 61 : cluster [DBG] 2.13 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 1400832 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:08.689109+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:37.711887+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.15 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:37.722390+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.15 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 63)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:37.711887+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.15 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:37.722390+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.15 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1392640 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:09.689336+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1392640 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:10.689644+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:39.714631+0000 osd.0 (osd.0) 64 : cluster [DBG] 3.15 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:39.724682+0000 osd.0 (osd.0) 65 : cluster [DBG] 3.15 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 65)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:39.714631+0000 osd.0 (osd.0) 64 : cluster [DBG] 3.15 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:39.724682+0000 osd.0 (osd.0) 65 : cluster [DBG] 3.15 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 1384448 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:11.689897+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 1 last_log 66 sent 65 num 1 unsent 1 sending 1
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:41.681559+0000 osd.0 (osd.0) 66 : cluster [DBG] 7.1b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 419620 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 1368064 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 66)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:41.681559+0000 osd.0 (osd.0) 66 : cluster [DBG] 7.1b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:12.690363+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 1 last_log 67 sent 66 num 1 unsent 1 sending 1
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:41.692056+0000 osd.0 (osd.0) 67 : cluster [DBG] 7.1b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 1368064 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 67)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:41.692056+0000 osd.0 (osd.0) 67 : cluster [DBG] 7.1b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:13.690633+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 1 last_log 68 sent 67 num 1 unsent 1 sending 1
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:43.680601+0000 osd.0 (osd.0) 68 : cluster [DBG] 7.13 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1327104 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 68)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:43.680601+0000 osd.0 (osd.0) 68 : cluster [DBG] 7.13 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:14.690848+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 1 last_log 69 sent 68 num 1 unsent 1 sending 1
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:43.691184+0000 osd.0 (osd.0) 69 : cluster [DBG] 7.13 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1327104 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 69)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:43.691184+0000 osd.0 (osd.0) 69 : cluster [DBG] 7.13 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:15.691109+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 1318912 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:16.691331+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 422033 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 1318912 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:17.691571+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 1318912 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:18.691702+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 1310720 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:19.691890+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 1310720 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:20.692209+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 1302528 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:21.692379+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 422033 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 1302528 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:22.692551+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 1294336 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:23.692782+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 1286144 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:24.693014+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 1277952 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.342010498s of 17.853366852s, submitted: 8
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:25.693230+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:55.565374+0000 osd.0 (osd.0) 70 : cluster [DBG] 2.16 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:55.575706+0000 osd.0 (osd.0) 71 : cluster [DBG] 2.16 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 71)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:55.565374+0000 osd.0 (osd.0) 70 : cluster [DBG] 2.16 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:55.575706+0000 osd.0 (osd.0) 71 : cluster [DBG] 2.16 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 1269760 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:26.693500+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:56.611736+0000 osd.0 (osd.0) 72 : cluster [DBG] 3.9 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:56.622314+0000 osd.0 (osd.0) 73 : cluster [DBG] 3.9 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 73)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:56.611736+0000 osd.0 (osd.0) 72 : cluster [DBG] 3.9 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:56.622314+0000 osd.0 (osd.0) 73 : cluster [DBG] 3.9 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 426857 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 1269760 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:27.694526+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 1269760 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:28.695289+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:58.613524+0000 osd.0 (osd.0) 74 : cluster [DBG] 2.8 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:15:58.624069+0000 osd.0 (osd.0) 75 : cluster [DBG] 2.8 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 75)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:58.613524+0000 osd.0 (osd.0) 74 : cluster [DBG] 2.8 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:15:58.624069+0000 osd.0 (osd.0) 75 : cluster [DBG] 2.8 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 1228800 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:29.696035+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 1228800 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:30.696647+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:00.594778+0000 osd.0 (osd.0) 76 : cluster [DBG] 3.a scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:00.605458+0000 osd.0 (osd.0) 77 : cluster [DBG] 3.a scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 77)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:00.594778+0000 osd.0 (osd.0) 76 : cluster [DBG] 3.a scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:00.605458+0000 osd.0 (osd.0) 77 : cluster [DBG] 3.a scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1204224 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:31.696827+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:01.623503+0000 osd.0 (osd.0) 78 : cluster [DBG] 2.b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:01.634143+0000 osd.0 (osd.0) 79 : cluster [DBG] 2.b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 79)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:01.623503+0000 osd.0 (osd.0) 78 : cluster [DBG] 2.b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:01.634143+0000 osd.0 (osd.0) 79 : cluster [DBG] 2.b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 434090 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1204224 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:32.697070+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1196032 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:33.697452+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1196032 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:34.697774+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1196032 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.066798210s of 10.089524269s, submitted: 10
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:35.697965+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:05.654967+0000 osd.0 (osd.0) 80 : cluster [DBG] 7.f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:05.665520+0000 osd.0 (osd.0) 81 : cluster [DBG] 7.f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 81)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:05.654967+0000 osd.0 (osd.0) 80 : cluster [DBG] 7.f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:05.665520+0000 osd.0 (osd.0) 81 : cluster [DBG] 7.f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 1179648 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:36.698242+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:06.613011+0000 osd.0 (osd.0) 82 : cluster [DBG] 5.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:06.623202+0000 osd.0 (osd.0) 83 : cluster [DBG] 5.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 83)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:06.613011+0000 osd.0 (osd.0) 82 : cluster [DBG] 5.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:06.623202+0000 osd.0 (osd.0) 83 : cluster [DBG] 5.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 438912 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 1163264 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:37.698493+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:07.589698+0000 osd.0 (osd.0) 84 : cluster [DBG] 3.6 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:07.600265+0000 osd.0 (osd.0) 85 : cluster [DBG] 3.6 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 85)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:07.589698+0000 osd.0 (osd.0) 84 : cluster [DBG] 3.6 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:07.600265+0000 osd.0 (osd.0) 85 : cluster [DBG] 3.6 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 1155072 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:38.698747+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 1155072 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:39.698967+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 1155072 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:40.699218+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 1146880 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:41.699383+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 441323 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 1146880 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:42.699589+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 1138688 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:43.699733+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 1138688 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:44.699864+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 1130496 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:45.700069+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:15.526651+0000 osd.0 (osd.0) 86 : cluster [DBG] 5.5 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:15.537266+0000 osd.0 (osd.0) 87 : cluster [DBG] 5.5 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 87)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:15.526651+0000 osd.0 (osd.0) 86 : cluster [DBG] 5.5 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:15.537266+0000 osd.0 (osd.0) 87 : cluster [DBG] 5.5 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.836798668s of 10.853741646s, submitted: 8
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 1122304 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:46.700389+0000)
Jan 29 09:44:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15130 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:16.508694+0000 osd.0 (osd.0) 88 : cluster [DBG] 5.2 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:16.519266+0000 osd.0 (osd.0) 89 : cluster [DBG] 5.2 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 89)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:16.508694+0000 osd.0 (osd.0) 88 : cluster [DBG] 5.2 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:16.519266+0000 osd.0 (osd.0) 89 : cluster [DBG] 5.2 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 448558 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:47.700540+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:17.523024+0000 osd.0 (osd.0) 90 : cluster [DBG] 2.1f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:17.533562+0000 osd.0 (osd.0) 91 : cluster [DBG] 2.1f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 1114112 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 91)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:17.523024+0000 osd.0 (osd.0) 90 : cluster [DBG] 2.1f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:17.533562+0000 osd.0 (osd.0) 91 : cluster [DBG] 2.1f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:48.700777+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 1105920 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:49.701348+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:19.499733+0000 osd.0 (osd.0) 92 : cluster [DBG] 3.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:19.510414+0000 osd.0 (osd.0) 93 : cluster [DBG] 3.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1089536 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 93)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:19.499733+0000 osd.0 (osd.0) 92 : cluster [DBG] 3.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:19.510414+0000 osd.0 (osd.0) 93 : cluster [DBG] 3.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:50.701577+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:20.509245+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.2 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:20.519683+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.2 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1073152 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 95)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:20.509245+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.2 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:20.519683+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.2 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:51.702016+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:21.486570+0000 osd.0 (osd.0) 96 : cluster [DBG] 2.f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:21.496997+0000 osd.0 (osd.0) 97 : cluster [DBG] 2.f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1056768 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 97)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:21.486570+0000 osd.0 (osd.0) 96 : cluster [DBG] 2.f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:21.496997+0000 osd.0 (osd.0) 97 : cluster [DBG] 2.f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458202 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:52.702719+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:22.525232+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.6 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:22.535213+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.6 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1048576 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 99)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:22.525232+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.6 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:22.535213+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.6 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:53.702969+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1048576 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:54.703122+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:24.525868+0000 osd.0 (osd.0) 100 : cluster [DBG] 2.1c scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:24.536642+0000 osd.0 (osd.0) 101 : cluster [DBG] 2.1c scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1024000 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 101)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:24.525868+0000 osd.0 (osd.0) 100 : cluster [DBG] 2.1c scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:24.536642+0000 osd.0 (osd.0) 101 : cluster [DBG] 2.1c scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:55.703360+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1024000 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:56.703523+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1024000 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.038551331s of 11.073991776s, submitted: 14
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 463028 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:57.703667+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:27.582697+0000 osd.0 (osd.0) 102 : cluster [DBG] 7.18 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:27.592843+0000 osd.0 (osd.0) 103 : cluster [DBG] 7.18 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 1007616 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 103)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:27.582697+0000 osd.0 (osd.0) 102 : cluster [DBG] 7.18 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:27.592843+0000 osd.0 (osd.0) 103 : cluster [DBG] 7.18 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:58.704225+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 999424 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:15:59.704854+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 999424 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:00.705072+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 999424 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:01.705292+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:31.631444+0000 osd.0 (osd.0) 104 : cluster [DBG] 5.4 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:31.642175+0000 osd.0 (osd.0) 105 : cluster [DBG] 5.4 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 983040 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 105)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:31.631444+0000 osd.0 (osd.0) 104 : cluster [DBG] 5.4 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:31.642175+0000 osd.0 (osd.0) 105 : cluster [DBG] 5.4 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 465439 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:02.705564+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 983040 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:03.705739+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 974848 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:04.706401+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:34.594307+0000 osd.0 (osd.0) 106 : cluster [DBG] 5.7 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:34.604737+0000 osd.0 (osd.0) 107 : cluster [DBG] 5.7 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 958464 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 107)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:34.594307+0000 osd.0 (osd.0) 106 : cluster [DBG] 5.7 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:34.604737+0000 osd.0 (osd.0) 107 : cluster [DBG] 5.7 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:05.706672+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:35.627478+0000 osd.0 (osd.0) 108 : cluster [DBG] 7.9 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:35.638085+0000 osd.0 (osd.0) 109 : cluster [DBG] 7.9 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 942080 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 109)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:35.627478+0000 osd.0 (osd.0) 108 : cluster [DBG] 7.9 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:35.638085+0000 osd.0 (osd.0) 109 : cluster [DBG] 7.9 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:06.707265+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:36.648220+0000 osd.0 (osd.0) 110 : cluster [DBG] 3.1 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:36.658744+0000 osd.0 (osd.0) 111 : cluster [DBG] 3.1 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 917504 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 111)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:36.648220+0000 osd.0 (osd.0) 110 : cluster [DBG] 3.1 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:36.658744+0000 osd.0 (osd.0) 111 : cluster [DBG] 3.1 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 472672 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:07.708059+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 917504 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.989176750s of 11.068286896s, submitted: 10
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:08.708282+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:38.651017+0000 osd.0 (osd.0) 112 : cluster [DBG] 7.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:38.661599+0000 osd.0 (osd.0) 113 : cluster [DBG] 7.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 909312 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 113)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:38.651017+0000 osd.0 (osd.0) 112 : cluster [DBG] 7.3 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:38.661599+0000 osd.0 (osd.0) 113 : cluster [DBG] 7.3 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:09.708854+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 884736 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:10.709090+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:40.678674+0000 osd.0 (osd.0) 114 : cluster [DBG] 2.1d scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:40.689235+0000 osd.0 (osd.0) 115 : cluster [DBG] 2.1d scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 868352 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 115)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:40.678674+0000 osd.0 (osd.0) 114 : cluster [DBG] 2.1d scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:40.689235+0000 osd.0 (osd.0) 115 : cluster [DBG] 2.1d scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:11.709951+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 860160 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:12.710380+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 477496 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 860160 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:13.710755+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 843776 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:14.711104+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 4 last_log 119 sent 115 num 4 unsent 4 sending 4
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:43.723918+0000 osd.0 (osd.0) 116 : cluster [DBG] 3.c scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:43.734673+0000 osd.0 (osd.0) 117 : cluster [DBG] 3.c scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:44.677672+0000 osd.0 (osd.0) 118 : cluster [DBG] 7.1f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:44.688248+0000 osd.0 (osd.0) 119 : cluster [DBG] 7.1f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 827392 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 119)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:43.723918+0000 osd.0 (osd.0) 116 : cluster [DBG] 3.c scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:43.734673+0000 osd.0 (osd.0) 117 : cluster [DBG] 3.c scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:44.677672+0000 osd.0 (osd.0) 118 : cluster [DBG] 7.1f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:44.688248+0000 osd.0 (osd.0) 119 : cluster [DBG] 7.1f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:15.711401+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 827392 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:16.711609+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 827392 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:17.712128+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 482320 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 819200 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.023255348s of 10.042103767s, submitted: 8
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:18.712305+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:48.693242+0000 osd.0 (osd.0) 120 : cluster [DBG] 3.1b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:48.703955+0000 osd.0 (osd.0) 121 : cluster [DBG] 3.1b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 819200 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 121)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:48.693242+0000 osd.0 (osd.0) 120 : cluster [DBG] 3.1b scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:48.703955+0000 osd.0 (osd.0) 121 : cluster [DBG] 3.1b scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:19.712753+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 802816 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:20.713211+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 794624 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:21.713382+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:51.652897+0000 osd.0 (osd.0) 122 : cluster [DBG] 2.18 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:51.663463+0000 osd.0 (osd.0) 123 : cluster [DBG] 2.18 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 770048 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 123)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:51.652897+0000 osd.0 (osd.0) 122 : cluster [DBG] 2.18 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:51.663463+0000 osd.0 (osd.0) 123 : cluster [DBG] 2.18 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:22.713661+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:52.665552+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.4 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:52.676049+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.4 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 489557 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 761856 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 125)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:52.665552+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.4 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:52.676049+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.4 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:23.714000+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 761856 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:24.714166+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 753664 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:25.714494+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 753664 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:26.714763+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:55.724963+0000 osd.0 (osd.0) 126 : cluster [DBG] 5.1e scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:55.735509+0000 osd.0 (osd.0) 127 : cluster [DBG] 5.1e scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 745472 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 127)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:55.724963+0000 osd.0 (osd.0) 126 : cluster [DBG] 5.1e scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:55.735509+0000 osd.0 (osd.0) 127 : cluster [DBG] 5.1e scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:27.715040+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 4 last_log 131 sent 127 num 4 unsent 4 sending 4
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:56.720704+0000 osd.0 (osd.0) 128 : cluster [DBG] 2.19 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:56.731161+0000 osd.0 (osd.0) 129 : cluster [DBG] 2.19 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:57.685567+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:16:57.696265+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 496794 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 737280 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 131)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:56.720704+0000 osd.0 (osd.0) 128 : cluster [DBG] 2.19 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:56.731161+0000 osd.0 (osd.0) 129 : cluster [DBG] 2.19 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:57.685567+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.f scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:16:57.696265+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.f scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:28.715287+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 729088 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:29.715441+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 704512 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:30.715704+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 704512 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:31.715991+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 696320 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:32.716793+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 496794 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 696320 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:33.717279+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.012383461s of 15.082575798s, submitted: 12
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 696320 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:34.717777+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:17:03.775798+0000 osd.0 (osd.0) 132 : cluster [DBG] 3.17 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  will send 2026-01-29T09:17:03.786385+0000 osd.0 (osd.0) 133 : cluster [DBG] 3.17 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 688128 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client handle_log_ack log(last 133)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:17:03.775798+0000 osd.0 (osd.0) 132 : cluster [DBG] 3.17 scrub starts
Jan 29 09:44:45 compute-0 ceph-osd[86001]: log_client  logged 2026-01-29T09:17:03.786385+0000 osd.0 (osd.0) 133 : cluster [DBG] 3.17 scrub ok
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:35.718114+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 688128 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:36.718562+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 688128 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:37.718720+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 679936 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:38.719097+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 679936 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:39.719523+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 663552 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:40.720269+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 663552 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:41.720697+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 663552 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:42.721053+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 655360 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:43.721306+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 655360 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:44.721675+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 647168 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:45.721924+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 647168 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:46.722088+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 647168 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:47.722269+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 638976 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:48.722545+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 638976 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:49.722738+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 622592 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:50.722925+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 614400 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:51.723206+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 614400 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:52.723437+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 614400 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:53.723644+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 606208 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:54.724043+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 606208 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:55.724302+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 589824 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:56.724481+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 589824 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:57.724876+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 581632 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:58.725055+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 573440 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:16:59.725278+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 573440 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:00.725564+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 565248 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:01.725841+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 565248 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:02.726006+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 565248 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:03.726229+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 557056 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:04.726823+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 557056 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:05.727286+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 548864 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:06.727583+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 548864 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:07.727728+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 532480 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:08.727883+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 516096 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:09.728171+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 516096 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:10.728568+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 507904 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:11.728720+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 507904 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:12.728868+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 507904 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:13.729202+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 499712 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:14.729418+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 499712 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:15.729656+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 491520 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:16.729816+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 491520 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:17.730023+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 491520 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:18.730213+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 483328 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:19.730506+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 483328 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:20.730805+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 483328 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:21.730996+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 466944 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:22.731256+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 466944 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:23.731476+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 458752 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:24.731662+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 458752 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:25.731928+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 442368 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:26.732124+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 434176 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:27.732691+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 434176 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:28.732952+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 417792 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:29.733205+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 409600 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:30.733438+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 409600 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:31.733611+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 401408 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:32.733776+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 401408 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:33.733955+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 401408 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:34.734235+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 393216 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:35.734394+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 393216 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:36.734581+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 385024 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:37.734750+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 385024 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:38.734948+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 393216 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:39.735113+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 385024 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:40.735378+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 385024 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:41.735534+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 376832 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:42.735694+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 376832 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:43.735901+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 376832 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:44.736581+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 368640 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:45.736751+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 360448 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:46.736952+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 352256 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:47.737175+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 352256 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:48.737352+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 352256 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:49.737493+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 344064 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:50.737685+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 344064 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:51.737849+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 344064 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:52.738099+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 335872 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:53.738207+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 335872 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:54.738358+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 327680 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:55.738526+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 327680 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:56.738670+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 327680 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:57.738824+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 319488 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:58.739058+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 319488 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:17:59.739236+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 311296 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:00.739421+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 311296 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:01.739566+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:02.739737+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 311296 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:03.739909+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 303104 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:04.740079+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 303104 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:05.740259+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 294912 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:06.740524+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 294912 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:07.740745+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 294912 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:08.740965+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 286720 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:09.741099+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 286720 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:10.741290+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 286720 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:11.741422+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 278528 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:12.741625+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 278528 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:13.741814+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 262144 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:14.742096+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 262144 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:15.742303+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 262144 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:16.742495+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 253952 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:17.742707+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 253952 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:18.742974+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 253952 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:19.743270+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 245760 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:20.743534+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 245760 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:21.743797+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 237568 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:22.744013+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 237568 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:23.744237+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 237568 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:24.744409+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 229376 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:25.744588+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 229376 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:26.744908+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 229376 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:27.745107+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 221184 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:28.745256+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 221184 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:29.745423+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 221184 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:30.745717+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 212992 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:31.745900+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 212992 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:32.746115+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 204800 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:33.746308+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 204800 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:34.746516+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 204800 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:35.746730+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 196608 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:36.746923+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 196608 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:37.747101+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 188416 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:38.747284+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 188416 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:39.747450+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 188416 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:40.747631+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 180224 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:41.747763+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 180224 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:42.747899+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 180224 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:43.748053+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 172032 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:44.748227+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 172032 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:45.748455+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 172032 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:46.748649+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 163840 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:47.748834+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 163840 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:48.748985+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 155648 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:49.749233+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 147456 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:50.749457+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 147456 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:51.749811+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 139264 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:52.749965+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 139264 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:53.750113+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 131072 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:54.750293+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 131072 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:55.750439+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 131072 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:56.750649+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 122880 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:57.750834+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 122880 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:58.751030+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 122880 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:18:59.751197+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 114688 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:00.751473+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 114688 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:01.751633+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 106496 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:02.751954+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 106496 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:03.752183+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 106496 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:04.752355+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 98304 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:05.752592+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 98304 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:06.752856+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 90112 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:07.753077+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 90112 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:08.753315+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 90112 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:09.753533+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 81920 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:10.753750+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 81920 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:11.753975+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 81920 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:12.754211+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 73728 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:13.754364+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 73728 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:14.754587+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 65536 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:15.754882+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 65536 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:16.755050+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 65536 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:17.755241+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 57344 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:18.755439+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 57344 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:19.755579+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 57344 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:20.755787+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 49152 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:21.756000+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 49152 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:22.756236+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 40960 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:23.756459+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 40960 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:24.756669+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 40960 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:25.756839+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 32768 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:26.757030+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 32768 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:27.757263+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 24576 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:28.757495+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 24576 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:29.757640+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 24576 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:30.757905+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 16384 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:31.758077+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 16384 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:32.758269+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 16384 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:33.758482+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 8192 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:34.758635+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 8192 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:35.758828+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 0 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:36.758961+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 0 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:37.759123+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 1040384 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:38.759318+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 1040384 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:39.759480+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 1040384 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:40.759687+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 1032192 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:41.759881+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 1032192 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:42.760027+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 1032192 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:43.760236+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 1024000 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:44.760406+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 1024000 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:45.760642+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 1015808 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:46.760833+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 1015808 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:47.761047+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 1015808 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:48.761249+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1007616 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:49.761419+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1007616 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:50.761616+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 999424 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:51.761776+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 999424 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:52.761973+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 999424 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:53.762109+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 991232 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:54.762215+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 991232 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:55.762441+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 983040 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:56.762750+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 983040 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:57.762910+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 983040 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:58.763084+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 974848 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:19:59.763201+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 974848 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:00.763390+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 966656 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:01.763522+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 966656 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:02.763667+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 966656 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:03.763820+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 958464 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:04.764028+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 958464 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:05.764200+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 958464 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:06.764352+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 950272 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:07.764509+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 950272 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:08.764728+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 950272 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:09.764894+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 942080 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:10.765057+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 942080 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:11.765202+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 933888 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:12.765331+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 933888 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:13.765503+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 925696 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:14.765720+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 925696 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:15.765924+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 925696 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:16.766079+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 917504 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:17.766208+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 917504 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:18.766346+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 917504 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:19.766563+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 909312 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:20.766760+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 909312 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:21.766918+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 909312 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:22.767074+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 901120 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:23.767250+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 901120 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:24.767387+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 892928 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:25.767558+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 892928 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:26.767682+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 884736 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:27.767807+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 876544 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:28.767938+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 876544 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:29.768070+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 868352 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:30.768472+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 868352 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:31.768629+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 868352 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:32.768789+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 860160 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:33.768996+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 860160 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:34.769251+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 860160 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:35.769429+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 851968 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:36.769615+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 851968 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:37.769804+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 843776 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:38.770055+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 843776 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:39.770294+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 843776 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:40.770532+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 835584 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:41.770729+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 835584 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:42.770902+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 835584 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:43.771056+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 827392 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:44.771256+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 827392 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:45.771490+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 819200 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:46.771709+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 819200 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:47.771900+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 819200 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:48.772102+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 811008 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:49.772331+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 811008 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:50.772526+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 802816 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:51.772701+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 802816 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:52.772907+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 802816 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:53.773126+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 794624 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:54.773304+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 794624 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:55.773515+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 794624 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:56.773692+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 786432 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:57.773842+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 786432 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:58.773986+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 786432 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:20:59.774145+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 778240 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:00.774348+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 778240 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:01.774505+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 770048 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:02.774693+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 770048 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:03.774857+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 770048 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:04.775024+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 761856 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:05.775201+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 761856 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:06.775362+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 761856 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:07.775491+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 753664 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:08.775667+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 753664 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:09.775861+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 745472 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:10.776059+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 745472 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:11.776223+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 745472 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:12.776369+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 737280 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:13.776523+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 737280 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:14.776699+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 737280 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:15.776872+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 729088 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:16.777008+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 729088 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:17.777207+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 720896 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:18.777343+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 720896 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:19.777482+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 720896 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:20.777673+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 712704 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:21.777822+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 712704 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:22.777951+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 712704 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:23.778090+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 704512 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:24.778300+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 704512 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:25.778574+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 696320 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:26.778685+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 696320 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:27.778890+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 696320 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:28.779097+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 688128 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:29.779348+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 688128 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:30.779577+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 679936 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:31.779816+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 679936 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:32.779985+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 679936 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:33.780189+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 671744 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:34.780378+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 671744 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:35.780589+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 663552 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:36.780802+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 663552 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:37.781066+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 663552 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:38.781248+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 655360 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:39.781497+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 655360 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:40.781791+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 655360 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:41.782053+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 647168 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:42.782456+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 647168 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:43.782623+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 638976 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:44.782894+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 638976 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:45.783163+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 638976 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:46.783369+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 630784 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:47.783550+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 630784 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:48.783788+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 622592 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:49.784080+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 622592 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:50.784372+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 622592 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:51.784609+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 614400 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:52.784815+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 614400 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:53.784984+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 614400 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:54.785120+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 606208 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:55.785251+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 606208 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:56.785915+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 606208 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:57.786069+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 598016 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:58.786185+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 598016 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:21:59.786346+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 589824 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:00.786598+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 589824 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:01.786750+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 581632 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:02.786916+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 581632 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:03.787059+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 581632 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:04.787226+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 573440 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:05.787357+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 573440 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:06.787556+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 573440 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:07.787681+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 565248 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:08.787795+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 565248 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:09.787929+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 565248 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:10.788104+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 557056 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:11.788281+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 557056 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:12.788464+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 557056 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:13.788671+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 548864 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:14.788816+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 548864 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:15.788989+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4241 writes, 19K keys, 4241 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4241 writes, 370 syncs, 11.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4241 writes, 19K keys, 4241 commit groups, 1.0 writes per commit group, ingest: 15.92 MB, 0.03 MB/s
                                           Interval WAL: 4241 writes, 370 syncs, 11.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 466944 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:16.789172+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 466944 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:17.789329+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 466944 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:18.789485+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 458752 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:19.790205+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 458752 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:20.790400+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 450560 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:21.791080+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 450560 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:22.791229+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 450560 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:23.791382+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 442368 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:24.791540+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 442368 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:25.791699+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 442368 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:26.791866+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 434176 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:27.792020+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 434176 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:28.792175+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:29.792416+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 425984 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:30.792727+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 425984 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:31.792916+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 425984 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:32.793185+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 417792 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:33.793392+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 417792 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:34.793567+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 409600 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:35.793723+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 409600 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:36.793891+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 409600 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:37.794031+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 401408 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:38.794190+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 401408 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:39.794341+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 401408 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:40.794565+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 393216 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:41.794697+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 393216 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:42.794849+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 393216 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:43.795035+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 385024 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:44.795272+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 385024 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:45.795424+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 376832 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:46.795582+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 376832 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:47.795736+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 376832 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:48.795891+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 368640 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:49.796047+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 368640 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:50.796256+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 360448 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:51.796434+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 360448 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:52.796600+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 360448 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:53.796741+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 352256 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:54.796923+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 352256 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:55.797126+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 352256 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:56.797319+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 344064 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:57.797484+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 344064 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:58.797639+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 335872 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:22:59.797781+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 335872 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:00.797950+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 335872 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:01.798083+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 327680 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:02.798425+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 327680 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:03.798625+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 319488 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:04.798775+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 319488 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:05.798972+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 319488 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:06.799086+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 311296 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:07.799231+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 311296 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:08.799414+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 311296 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:09.800072+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 303104 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:10.800314+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 303104 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:11.800469+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 294912 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:12.800603+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 294912 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:13.800732+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 294912 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:14.800862+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 286720 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:15.800997+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 286720 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:16.801162+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 286720 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:17.801267+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 278528 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:18.801413+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 278528 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:19.801561+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 270336 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:20.801747+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 270336 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:21.801877+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 270336 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:22.801996+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 262144 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:23.802233+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 262144 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:24.802385+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 253952 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:25.802562+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 253952 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:26.802722+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 253952 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:27.802914+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 245760 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:28.803075+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 245760 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:29.803194+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 245760 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:30.803352+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 237568 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:31.803498+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 237568 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:32.803675+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 229376 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:33.803820+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 229376 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:34.804004+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 229376 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:35.804152+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 221184 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:36.804299+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 221184 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:37.804493+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 221184 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:38.804653+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 212992 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:39.804781+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 212992 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:40.804933+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 204800 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:41.805059+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 204800 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:42.805209+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 204800 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:43.805338+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 196608 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:44.805533+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 196608 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:45.805662+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 188416 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:46.806072+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 188416 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:47.806342+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 188416 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:48.806467+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 180224 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:49.806586+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 180224 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:50.806737+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 180224 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:51.806865+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 172032 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:52.806981+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 172032 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:53.807108+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 172032 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:54.807285+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 163840 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:55.807432+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 163840 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:56.807614+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 155648 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:57.807803+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 155648 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:58.807954+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64765952 unmapped: 155648 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:23:59.808171+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 147456 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:00.808367+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 147456 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:01.808497+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 139264 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:02.808604+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 139264 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:03.808756+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 139264 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:04.808912+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 131072 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:05.809778+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 131072 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:06.809991+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 122880 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:07.810239+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 122880 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:08.810416+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 122880 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:09.810587+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 114688 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:10.810825+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 114688 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:11.811006+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 114688 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:12.811201+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 106496 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:13.811437+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 106496 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:14.811625+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 98304 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:15.811803+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 98304 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:16.812112+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 98304 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:17.812316+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:18.812487+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:19.812650+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:20.812913+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:21.813087+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:22.813209+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:23.813394+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:24.813659+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:25.813807+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:26.813981+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:27.814101+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:28.814302+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:29.814502+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:30.814707+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:31.814865+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:32.815034+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:33.815226+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:34.815392+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:35.815535+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:36.815668+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:37.815872+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:38.816080+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:39.816256+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:40.816448+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:41.816626+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:42.816772+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:43.816955+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:44.817084+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:45.817221+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:46.817376+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:47.817558+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:48.817691+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:49.817856+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:50.818067+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:51.818194+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:52.818358+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:53.818716+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:54.818865+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:55.819055+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:56.819240+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:57.819391+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:58.819550+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:24:59.819699+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:00.819879+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:01.820038+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:02.820218+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:03.820433+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:04.820596+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:05.820746+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:06.820912+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:07.821092+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:08.821282+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:09.821507+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:10.821726+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:11.821956+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:12.822260+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:13.822385+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:14.822566+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:15.822772+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:16.822913+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:17.823199+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:18.823357+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:19.823507+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:20.823791+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:21.824004+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:22.824231+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:23.824428+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:24.824596+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:25.824774+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:26.824933+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:27.825099+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:28.825225+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:29.825368+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:30.825760+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:31.825916+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:32.826056+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:33.826235+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:34.826418+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:35.826599+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:36.826808+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:37.827001+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:38.827778+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:39.827945+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:40.828610+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:41.829003+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:42.829342+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:43.829491+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:44.830028+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:45.830197+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:46.830508+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:47.830657+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:48.830824+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:49.830992+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:50.831173+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:51.831320+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:52.831597+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:53.832025+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:54.832219+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:55.832367+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:56.832502+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:57.832633+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:58.832760+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:25:59.832895+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:00.833042+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:01.833173+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:02.833313+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:03.833475+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:04.833668+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:05.833814+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:06.834001+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:07.834243+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:08.834383+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:09.834528+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:10.834707+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:11.834894+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:12.835059+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:13.835255+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:14.835482+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:15.835627+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:16.835888+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:17.836100+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:18.836194+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:19.836552+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:20.836843+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:21.837067+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:22.837239+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:23.837399+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:24.837543+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:25.837709+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:26.837864+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:27.838063+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:28.838248+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:29.838415+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:30.838631+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:31.838837+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:32.839022+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:33.839229+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:34.839385+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:35.839545+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:36.839713+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:37.839937+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:38.840128+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:39.840300+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:40.840489+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:41.840625+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:42.840802+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:43.841023+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:44.841211+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:45.841357+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:46.841543+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:47.841734+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:48.841916+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:49.842156+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:50.842361+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:51.842593+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:52.842751+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:53.842991+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:54.843215+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:55.843365+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:56.843519+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:57.843678+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:58.843845+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:26:59.843995+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:00.844220+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:01.844387+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:02.844576+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:03.844710+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:04.844858+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:05.845028+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:06.845232+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:07.845416+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:08.845632+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:09.845786+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:10.845974+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:11.846155+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:12.846371+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:13.846626+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:14.846838+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:15.847011+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:16.847195+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:17.849570+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:18.849994+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:19.850866+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:20.851308+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:21.851612+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:22.852069+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:23.852375+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:24.852621+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:25.853860+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:26.854121+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:27.854320+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:28.854489+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:29.854661+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:30.854861+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:31.855415+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:32.855566+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:33.855766+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:34.855934+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:35.856154+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:36.856286+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:37.856874+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:38.857039+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:39.857344+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:40.857584+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:41.857880+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:42.858271+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:43.858474+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 65536 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:44.858636+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 ms_handle_reset con 0x556d24953400 session 0x556d23683180
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273be000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:45.859071+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:46.859272+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:47.859455+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:48.860041+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:49.860232+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:50.860450+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:51.860619+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:52.860840+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:53.861068+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:54.861243+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:55.861448+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:56.861651+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:57.861827+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:58.861994+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:27:59.862223+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:00.862419+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:01.862646+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:02.862762+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:03.862889+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:04.863023+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:05.863176+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:06.863286+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:07.863399+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:08.863525+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:09.863626+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:10.863804+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:11.863943+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:12.864754+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:13.864880+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:14.865046+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:15.865211+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:16.865392+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:17.865520+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:18.865708+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:19.865874+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:20.866085+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:21.866262+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:22.866438+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:23.866619+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:24.877071+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:25.877194+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:26.877326+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:27.877505+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:28.877645+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:29.877848+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:30.878076+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:31.878236+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:32.878404+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:33.878621+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:34.878769+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:35.879020+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 ms_handle_reset con 0x556d25957000 session 0x556d25eaa540
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273be400
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:36.879207+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:37.879359+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:38.879501+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:39.879642+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:40.879861+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:41.880084+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:42.880260+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:43.880417+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:44.880557+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:45.880720+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:46.880912+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:47.881078+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:48.881267+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:49.881445+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:50.881653+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:51.881795+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:52.881988+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:53.882150+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:54.882330+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:55.882520+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:56.882717+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:57.882907+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:58.883089+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:28:59.883298+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:00.883502+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:01.883709+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:02.883905+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:03.884068+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:04.884278+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:05.884441+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:06.884604+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:07.884755+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:08.885028+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:09.885226+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:10.885539+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:11.885887+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:12.886117+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:13.886339+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:14.886605+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:15.886761+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:16.886897+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:17.887084+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:18.887301+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:19.887488+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:20.887733+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:21.887918+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:22.888086+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:23.888244+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:24.888363+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:25.888542+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:26.888681+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:27.888787+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:28.888899+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:29.888985+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:30.889209+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:31.889359+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:32.889472+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:33.889589+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:34.889668+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:35.889779+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:36.889893+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:37.890055+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:38.890208+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:39.890354+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:40.890505+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:41.890616+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:42.890739+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:43.890908+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:44.891053+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:45.891191+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:46.891415+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:47.891568+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:48.891707+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:49.891821+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:50.892024+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:51.892211+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:52.892296+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:53.892481+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:54.892668+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:55.892834+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:56.893008+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:57.893193+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:58.893400+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:29:59.893585+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:00.893830+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:01.893987+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:02.894159+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:03.894315+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:04.894492+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:05.894665+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:06.894783+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:07.894991+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:08.895239+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:09.895458+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:10.895708+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:11.895906+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:12.896053+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:13.896311+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:14.896492+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:15.896693+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:16.896871+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:17.897020+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:18.897183+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:19.897440+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:20.897709+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:21.897920+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:22.898156+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:23.898394+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:24.898656+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:25.898840+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:26.899034+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:27.899293+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:28.899507+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:29.899670+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:30.899892+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:31.900221+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:32.900423+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:33.900631+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:34.900839+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:35.901208+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:36.901450+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:37.901654+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:38.901869+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:39.902060+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:40.902467+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:41.902638+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:42.902812+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:43.902987+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:44.903123+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:45.903327+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:46.903573+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:47.903744+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:48.903911+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:49.904181+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:50.904429+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:51.904600+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:52.904739+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:53.904942+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:54.905103+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:55.905279+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:56.905439+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:57.905627+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:58.905770+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:30:59.905959+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:00.906160+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:01.906371+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:02.906574+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:03.906728+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:04.906888+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:05.907024+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:06.907257+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:07.907426+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:08.907639+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:09.907824+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:10.908046+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:11.908234+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:12.908406+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:13.908602+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:14.908775+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:15.908987+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:16.909166+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:17.909296+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:18.909439+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:19.909635+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:20.909872+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:21.910022+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:22.910164+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:23.910312+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:24.910469+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:25.910610+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 811008 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:26.910795+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:27.911008+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:28.911165+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:29.911354+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:30.911600+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:31.911752+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:32.911909+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:33.912097+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:34.912278+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:35.912435+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:36.912613+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:37.912799+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:38.912983+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:39.913220+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:40.913569+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:41.913744+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:42.913897+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:43.914074+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:44.914280+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:45.914492+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:46.914669+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:47.914837+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:48.915000+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:49.915165+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:50.915466+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:51.915723+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:52.915923+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:53.916082+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:54.916228+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:55.916467+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:56.916737+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:57.916895+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:58.917080+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:31:59.917257+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:00.917441+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:01.917631+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:02.917743+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:03.917919+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:04.918076+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:05.918196+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499207 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:06.918328+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:07.918419+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:08.918628+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 936.087219238s of 936.090454102s, submitted: 2
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:09.918735+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 47 heartbeat osd_stat(store_statfs(0x4fe158000/0x0/0x4ffc00000, data 0x2e252/0x74000, compress 0x0/0x0/0x0, omap 0x5bc1, meta 0x1a2a43f), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:10.918860+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:45 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 507716 data_alloc: 218103808 data_used: 934
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 884736 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 48 handle_osd_map epochs [49,49], i have 48, src has [1,49]
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:11.918981+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 49 ms_handle_reset con 0x556d25957000 session 0x556d2701a000
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 811008 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:12.919193+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 49 heartbeat osd_stat(store_statfs(0x4fe14d000/0x0/0x4ffc00000, data 0x32429/0x7d000, compress 0x0/0x0/0x0, omap 0x6935, meta 0x1a296cb), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273be800
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 49 heartbeat osd_stat(store_statfs(0x4fe14d000/0x0/0x4ffc00000, data 0x32429/0x7d000, compress 0x0/0x0/0x0, omap 0x6935, meta 0x1a296cb), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:13.919328+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 50 ms_handle_reset con 0x556d273be800 session 0x556d26aef500
Jan 29 09:44:45 compute-0 ceph-osd[86001]: osd.0 50 heartbeat osd_stat(store_statfs(0x4fe14c000/0x0/0x4ffc00000, data 0x3244c/0x7e000, compress 0x0/0x0/0x0, omap 0x6935, meta 0x1a296cb), peers [1,2] op hist [])
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:14.919518+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 524288 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:45 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:15.919670+0000)
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:44:45 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4304 writes, 19K keys, 4304 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4304 writes, 400 syncs, 10.76 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 63 writes, 170 keys, 63 commit groups, 1.0 writes per commit group, ingest: 0.17 MB, 0.00 MB/s
                                           Interval WAL: 63 writes, 30 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556d2365d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 516657 data_alloc: 218103808 data_used: 934
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 491520 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:16.919894+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 50 handle_osd_map epochs [50,51], i have 51, src has [1,51]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:17.920093+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:18.920265+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:19.920408+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:20.920590+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518517 data_alloc: 218103808 data_used: 934
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:21.920763+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:22.920886+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:23.921056+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:24.921218+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:25.921339+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518517 data_alloc: 218103808 data_used: 934
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:26.921470+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:27.921621+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:28.921815+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:29.922015+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:30.922209+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518517 data_alloc: 218103808 data_used: 934
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:31.922341+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:32.922476+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:33.922633+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:34.922810+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:35.923021+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518517 data_alloc: 218103808 data_used: 934
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:36.923231+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:37.923440+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:38.923612+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:39.923801+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:40.923980+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518517 data_alloc: 218103808 data_used: 934
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:41.924097+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bec00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.821676254s of 32.106227875s, submitted: 42
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe145000/0x0/0x4ffc00000, data 0x34f08/0x85000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 147456 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 51 handle_osd_map epochs [51,52], i have 52, src has [1,52]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:42.924253+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 52 ms_handle_reset con 0x556d273bec00 session 0x556d2701a8c0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 52 ms_handle_reset con 0x556d273bf000 session 0x556d2714bc00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 52 heartbeat osd_stat(store_statfs(0x4fe144000/0x0/0x4ffc00000, data 0x3533b/0x88000, compress 0x0/0x0/0x0, omap 0x6980, meta 0x1a29680), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 52 ms_handle_reset con 0x556d273bf400 session 0x556d2747c540
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 1179648 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 52 ms_handle_reset con 0x556d25957000 session 0x556d2747ca80
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:43.924382+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273be800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1130496 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 52 handle_osd_map epochs [52,53], i have 53, src has [1,53]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:44.924500+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 53 ms_handle_reset con 0x556d273be800 session 0x556d2747cfc0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 53 ms_handle_reset con 0x556d273bf000 session 0x556d271cd880
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 53 ms_handle_reset con 0x556d2749e400 session 0x556d26aee1c0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 2269184 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:45.924633+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749ec00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 578074 data_alloc: 218103808 data_used: 1958
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 10272768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:46.924777+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 54 ms_handle_reset con 0x556d2749e800 session 0x556d271cce00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 18685952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:47.924941+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 54 heartbeat osd_stat(store_statfs(0x4fd134000/0x0/0x4ffc00000, data 0x1039529/0x1093000, compress 0x0/0x0/0x0, omap 0x7edb, meta 0x1a28125), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 54 heartbeat osd_stat(store_statfs(0x4fc939000/0x0/0x4ffc00000, data 0x1839529/0x1893000, compress 0x0/0x0/0x0, omap 0x7edb, meta 0x1a28125), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 18685952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:48.925064+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 55 ms_handle_reset con 0x556d2749ec00 session 0x556d271ae380
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 55 ms_handle_reset con 0x556d25957000 session 0x556d25eaac40
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 18620416 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:49.925226+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 18620416 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:50.925560+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 55 heartbeat osd_stat(store_statfs(0x4fc936000/0x0/0x4ffc00000, data 0x183a727/0x1893000, compress 0x0/0x0/0x0, omap 0x82b9, meta 0x1a27d47), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 56 ms_handle_reset con 0x556d2749f000 session 0x556d26fde380
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546369 data_alloc: 218103808 data_used: 5011
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 18579456 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:51.925726+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.260757446s of 10.046355247s, submitted: 121
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 57 ms_handle_reset con 0x556d2749f400 session 0x556d23683180
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 18530304 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:52.925954+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 58 heartbeat osd_stat(store_statfs(0x4fe130000/0x0/0x4ffc00000, data 0x3d346/0x97000, compress 0x0/0x0/0x0, omap 0x8acf, meta 0x1a27531), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 18391040 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:53.926085+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 59 ms_handle_reset con 0x556d273bf400 session 0x556d2701b500
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:54.926253+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 18464768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 60 ms_handle_reset con 0x556d25957000 session 0x556d271cda40
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:55.926381+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 18456576 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749ec00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 61 ms_handle_reset con 0x556d2749ec00 session 0x556d252a7c00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 563435 data_alloc: 218103808 data_used: 9056
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:56.926530+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 18440192 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 61 heartbeat osd_stat(store_statfs(0x4fe126000/0x0/0x4ffc00000, data 0x4144c/0xa0000, compress 0x0/0x0/0x0, omap 0x9421, meta 0x1a26bdf), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:57.926677+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 18399232 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:58.926763+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 18333696 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 62 heartbeat osd_stat(store_statfs(0x4fe11b000/0x0/0x4ffc00000, data 0x43f54/0xa7000, compress 0x0/0x0/0x0, omap 0x9a46, meta 0x1a265ba), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:32:59.926913+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 18333696 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 62 ms_handle_reset con 0x556d2749f000 session 0x556d2716e700
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:00.927063+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 18317312 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 64 ms_handle_reset con 0x556d2749f400 session 0x556d26c35340
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x461e3/0xab000, compress 0x0/0x0/0x0, omap 0x9cb3, meta 0x1a2634d), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bec00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe118000/0x0/0x4ffc00000, data 0x483c5/0xb0000, compress 0x0/0x0/0x0, omap 0x9f24, meta 0x1a260dc), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 580573 data_alloc: 218103808 data_used: 9056
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:01.927266+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 17039360 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 65 ms_handle_reset con 0x556d273bec00 session 0x556d2701ae00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 65 ms_handle_reset con 0x556d25957000 session 0x556d25071c00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749ec00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.924083710s of 10.564991951s, submitted: 87
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:02.927508+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 17047552 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 66 ms_handle_reset con 0x556d2749f000 session 0x556d271aea80
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 66 ms_handle_reset con 0x556d2749ec00 session 0x556d250708c0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:03.927840+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 17022976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 67 ms_handle_reset con 0x556d2749f400 session 0x556d26c34540
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:04.928006+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 15753216 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 67 ms_handle_reset con 0x556d2749e800 session 0x556d25071340
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:05.928283+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 15794176 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 592591 data_alloc: 218103808 data_used: 9091
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:06.928452+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 15794176 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe109000/0x0/0x4ffc00000, data 0x4deaa/0xbf000, compress 0x0/0x0/0x0, omap 0xacff, meta 0x1a25301), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:07.928623+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 15794176 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:08.929603+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 15892480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 70 ms_handle_reset con 0x556d25957000 session 0x556d25eab340
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749fc00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:09.930253+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 15859712 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 71 ms_handle_reset con 0x556d2749fc00 session 0x556d251261c0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:10.931775+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 15826944 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:11.931940+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 600150 data_alloc: 218103808 data_used: 9091
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 15826944 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 72 ms_handle_reset con 0x556d2749e400 session 0x556d25eaba40
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:12.932273+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 15802368 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fe100000/0x0/0x4ffc00000, data 0x535ad/0xca000, compress 0x0/0x0/0x0, omap 0xbbd2, meta 0x1a2442e), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.267245293s of 10.588221550s, submitted: 77
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:13.932930+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 15745024 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 73 ms_handle_reset con 0x556d2749e000 session 0x556d24ef16c0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:14.933227+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 15392768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 74 heartbeat osd_stat(store_statfs(0x4fe0fc000/0x0/0x4ffc00000, data 0x54b9b/0xcb000, compress 0x0/0x0/0x0, omap 0xc13f, meta 0x1a23ec1), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 74 ms_handle_reset con 0x556d2749f800 session 0x556d25070e00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:15.933636+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 15343616 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 75 heartbeat osd_stat(store_statfs(0x4fe0fe000/0x0/0x4ffc00000, data 0x555c3/0xcc000, compress 0x0/0x0/0x0, omap 0xc507, meta 0x1a23af9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 75 ms_handle_reset con 0x556d25957000 session 0x556d25d48c40
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:16.933818+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 607715 data_alloc: 218103808 data_used: 13152
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 15319040 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 76 ms_handle_reset con 0x556d2749e000 session 0x556d2716fdc0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749e400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749fc00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:17.934029+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 15220736 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:18.934280+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 15155200 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:19.934665+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 15155200 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fe0f0000/0x0/0x4ffc00000, data 0x58bb8/0xd4000, compress 0x0/0x0/0x0, omap 0xd09f, meta 0x1a22f61), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:20.934967+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 14983168 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 78 ms_handle_reset con 0x556d2749f400 session 0x556d25071500
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:21.935235+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 622898 data_alloc: 218103808 data_used: 13152
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 14983168 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:22.935449+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 14983168 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749f000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 78 handle_osd_map epochs [80,80], i have 78, src has [1,80]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 78 handle_osd_map epochs [79,80], i have 78, src has [1,80]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.230520248s of 10.425002098s, submitted: 107
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 80 ms_handle_reset con 0x556d2749f000 session 0x556d24ef0c40
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:23.935690+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749ec00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 14548992 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 80 ms_handle_reset con 0x556d2749ec00 session 0x556d2716f880
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2749ec00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 81 ms_handle_reset con 0x556d2749ec00 session 0x556d27029dc0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:24.935986+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 14508032 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 82 heartbeat osd_stat(store_statfs(0x4fe0ea000/0x0/0x4ffc00000, data 0x5e2cb/0xe0000, compress 0x0/0x0/0x0, omap 0xda52, meta 0x1a225ae), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 82 ms_handle_reset con 0x556d25957000 session 0x556d2714ce00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:25.936225+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 14467072 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25957800 session 0x556d27193c00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:26.936436+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 639125 data_alloc: 218103808 data_used: 13152
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 14368768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:27.936632+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 14368768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:28.936789+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 14368768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:29.936904+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 heartbeat osd_stat(store_statfs(0x4fe0df000/0x0/0x4ffc00000, data 0x60f1d/0xe7000, compress 0x0/0x0/0x0, omap 0xe432, meta 0x1a21bce), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 14368768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25e6a400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25e6a400 session 0x556d25eaaa80
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d24952800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d24952800 session 0x556d25126700
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25957000 session 0x556d27007500
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 heartbeat osd_stat(store_statfs(0x4fe0df000/0x0/0x4ffc00000, data 0x60f1d/0xe7000, compress 0x0/0x0/0x0, omap 0xe432, meta 0x1a21bce), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread fragmentation_score=0.000120 took=0.000025s
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25957800 session 0x556d2528b880
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761c000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761d400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:30.937045+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d2761d400 session 0x556d25071340
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d2761c000 session 0x556d27137500
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25e6ac00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25e6ac00 session 0x556d271ae380
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25e6ac00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25e6ac00 session 0x556d2716f880
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25957000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d25957000 session 0x556d25eaaa80
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273be800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d273be800 session 0x556d24ef0a80
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 14229504 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d273bf000 session 0x556d271af6c0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:31.937189+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 639081 data_alloc: 218103808 data_used: 13152
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 14213120 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bfc00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 ms_handle_reset con 0x556d273bfc00 session 0x556d25070a80
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bfc00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 84 ms_handle_reset con 0x556d273bfc00 session 0x556d25070000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 84 ms_handle_reset con 0x556d273bf800 session 0x556d25eaac40
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:32.937325+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 14163968 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:33.937436+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fe0df000/0x0/0x4ffc00000, data 0x62415/0xeb000, compress 0x0/0x0/0x0, omap 0xec3e, meta 0x1a213c2), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 14163968 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bec00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25148800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.474985123s of 10.634794235s, submitted: 90
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:34.937585+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 14123008 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:35.937809+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 14123008 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:36.938002+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 644262 data_alloc: 218103808 data_used: 13152
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 14123008 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25148000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 84 ms_handle_reset con 0x556d25148000 session 0x556d2528ae00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761d400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:37.938230+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 14065664 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 85 ms_handle_reset con 0x556d2761d400 session 0x556d270061c0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761c000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761c400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 85 ms_handle_reset con 0x556d2761c400 session 0x556d26aeee00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 85 ms_handle_reset con 0x556d2761c000 session 0x556d270068c0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761dc00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 85 ms_handle_reset con 0x556d2761dc00 session 0x556d2714a700
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761d800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:38.938458+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 13778944 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 86 ms_handle_reset con 0x556d2761d800 session 0x556d26fdf340
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 86 heartbeat osd_stat(store_statfs(0x4fcf3c000/0x0/0x4ffc00000, data 0x639e2/0xee000, compress 0x0/0x0/0x0, omap 0xeec4, meta 0x2bc113c), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:39.938699+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 13746176 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 86 heartbeat osd_stat(store_statfs(0x4fcf3c000/0x0/0x4ffc00000, data 0x639e2/0xee000, compress 0x0/0x0/0x0, omap 0xeec4, meta 0x2bc113c), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:40.938961+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 13746176 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:41.939602+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 87 heartbeat osd_stat(store_statfs(0x4fcf39000/0x0/0x4ffc00000, data 0x65003/0xf1000, compress 0x0/0x0/0x0, omap 0xf14e, meta 0x2bc0eb2), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 653026 data_alloc: 218103808 data_used: 13168
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 13721600 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761c800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 87 ms_handle_reset con 0x556d2761c800 session 0x556d271afa40
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:42.939963+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761d400
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 13557760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:43.940113+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 88 ms_handle_reset con 0x556d2761d400 session 0x556d271ae380
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 13533184 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:44.941071+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 13533184 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 88 ms_handle_reset con 0x556d273bec00 session 0x556d2528bc00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.035446167s of 11.152009964s, submitted: 83
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 88 ms_handle_reset con 0x556d25148800 session 0x556d23683180
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d2761c000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 88 handle_osd_map epochs [88,89], i have 89, src has [1,89]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:45.941191+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 89 ms_handle_reset con 0x556d2761c000 session 0x556d271cd500
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 13418496 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:46.941500+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 656275 data_alloc: 218103808 data_used: 14350
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 13418496 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:47.941986+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 89 heartbeat osd_stat(store_statfs(0x4fcf32000/0x0/0x4ffc00000, data 0x69246/0xf8000, compress 0x0/0x0/0x0, omap 0x100fa, meta 0x2bbff06), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 13418496 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 90 ms_handle_reset con 0x556d2749e400 session 0x556d26c34380
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 90 ms_handle_reset con 0x556d2749fc00 session 0x556d25d481c0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:48.942431+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d25148800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 90 ms_handle_reset con 0x556d25148800 session 0x556d25d496c0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 13500416 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 90 heartbeat osd_stat(store_statfs(0x4fcf31000/0x0/0x4ffc00000, data 0x6a74a/0xfb000, compress 0x0/0x0/0x0, omap 0x106a5, meta 0x2bbf95b), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:49.942800+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 13500416 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 90 heartbeat osd_stat(store_statfs(0x4fcf31000/0x0/0x4ffc00000, data 0x6a74a/0xfb000, compress 0x0/0x0/0x0, omap 0x106a5, meta 0x2bbf95b), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:50.943398+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bec00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 90 ms_handle_reset con 0x556d273bec00 session 0x556d26fdfa40
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 13500416 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:51.943715+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 91 ms_handle_reset con 0x556d273bf000 session 0x556d2701ba40
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 661855 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 13475840 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:52.944030+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 92 heartbeat osd_stat(store_statfs(0x4fcf28000/0x0/0x4ffc00000, data 0x6d224/0x100000, compress 0x0/0x0/0x0, omap 0x10e51, meta 0x2bbf1af), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:53.944287+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 92 heartbeat osd_stat(store_statfs(0x4fcf28000/0x0/0x4ffc00000, data 0x6d224/0x100000, compress 0x0/0x0/0x0, omap 0x10e51, meta 0x2bbf1af), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:54.944762+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:55.944924+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:56.945184+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 665091 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 92 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.064319611s of 12.219558716s, submitted: 94
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:57.945580+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:58.945730+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:33:59.945997+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:00.946300+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:01.946540+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:02.946770+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:03.946970+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _renew_subs
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:04.947187+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:05.947321+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:06.947461+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:07.947674+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:08.947934+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:09.948216+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:10.948433+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:11.948645+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:12.948808+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:13.948994+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:14.949156+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:15.949354+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:16.949537+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:17.949705+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:18.949873+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:19.950069+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:20.950323+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:21.950467+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:22.950870+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:23.951085+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:24.951204+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:25.951360+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:26.951526+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:27.951844+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:28.951983+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:29.952163+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:30.952317+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:31.952453+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:32.952877+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:33.953062+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:34.953226+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:35.953358+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:36.953524+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:37.953659+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:38.953795+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:39.954023+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:40.954192+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:41.954342+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:42.954516+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:43.954668+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:44.954795+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:45.954963+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:46.955078+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:47.955237+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:48.955480+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:49.955638+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:50.955820+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:51.955980+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:52.956162+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:53.956279+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:54.956545+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:55.956691+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:56.956849+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:57.956961+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:58.957197+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:34:59.957383+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:00.957553+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:01.957692+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:02.957829+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:03.957996+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:04.958145+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:05.958309+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:06.958434+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:07.958614+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:08.958748+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:09.958858+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:10.959016+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:11.959177+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:12.959355+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:13.959477+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:14.959589+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 13467648 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:15.959709+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 13393920 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:16.959863+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'config diff' '{prefix=config diff}'
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'config show' '{prefix=config show}'
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'counter dump' '{prefix=counter dump}'
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'counter schema' '{prefix=counter schema}'
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 12787712 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:17.960004+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 12992512 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:18.960151+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'log dump' '{prefix=log dump}'
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 1949696 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:19.960292+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'perf dump' '{prefix=perf dump}'
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'perf schema' '{prefix=perf schema}'
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:20.960449+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:21.960579+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:22.960707+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:23.960876+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:24.961000+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:25.961209+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:26.961344+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:27.961487+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:28.961630+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:29.961753+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:30.961928+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:31.962101+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:32.962287+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:33.962427+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:34.962603+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:35.962777+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:36.962927+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:37.963092+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:38.963266+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:39.963425+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:40.963629+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:41.963805+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:42.964244+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:43.964390+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:44.964534+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:45.964678+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:46.964900+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:47.965039+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:48.965199+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:49.965374+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:50.965591+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:51.965805+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:52.965982+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:53.966202+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:54.966343+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:55.966540+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:56.966694+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:57.966831+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:58.966987+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:35:59.967124+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:00.967346+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:01.967498+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:02.967656+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:03.967839+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:04.967991+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:05.968377+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:06.968542+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:07.968713+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:08.968854+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:09.968996+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:10.969185+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:11.969402+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:12.969564+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:13.969698+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:14.969830+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:15.970044+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:16.970193+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:17.970389+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:18.970514+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:19.970674+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:20.970816+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:21.970981+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:22.971117+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:23.971305+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:24.971463+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:25.971641+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:26.971806+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:27.972011+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:28.972228+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:29.972394+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:30.972612+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:31.972842+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:32.973017+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:33.973206+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:34.973312+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:35.973396+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:36.973495+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:37.973627+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:38.973786+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:39.973996+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:40.974154+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:41.974291+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:42.974430+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:43.974582+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:44.974678+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:45.974856+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:46.975019+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:47.975178+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:48.975300+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:49.975422+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:50.975579+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:51.975719+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:52.975860+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:53.976017+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:54.976222+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:55.976406+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:56.976548+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:57.976677+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:58.976890+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:36:59.977052+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:00.977266+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:01.977500+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:02.977738+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:03.977910+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:04.978085+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:05.978225+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:06.978361+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:07.978498+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:08.978631+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:09.978751+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:10.978917+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:11.979090+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:12.979261+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:13.979355+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:14.979514+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:15.979857+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:16.980016+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:17.980232+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 12926976 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:18.980395+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:19.980547+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:20.980733+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:21.980965+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:22.981233+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:23.981357+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:24.981526+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:25.981699+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:26.981888+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:27.982052+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:28.982187+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:29.982450+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:30.982634+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:31.982776+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 12918784 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:32.982913+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:33.983067+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:34.983216+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:35.983803+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:36.984019+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:37.984224+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:38.984394+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:39.984551+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:40.984767+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:41.984919+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 12910592 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:42.985113+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 12902400 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:43.985331+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 12902400 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:44.985504+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 12902400 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:45.985664+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 12902400 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:46.985780+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 12902400 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:47.985954+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 12902400 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:48.986105+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 12902400 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:49.986244+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 12894208 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:50.986405+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 12894208 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:51.986524+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 12894208 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:52.986646+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 12894208 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:53.986798+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 12894208 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:54.986930+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:55.987102+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:56.987258+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:57.987374+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:58.987589+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:37:59.987838+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:00.988035+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:01.988243+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:02.988516+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:03.988778+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:04.989061+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:05.989269+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:06.989500+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:07.989740+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:08.989947+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:09.990191+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:10.990455+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:11.990671+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:12.990861+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:13.991063+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:14.991273+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:15.991436+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:16.991576+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:17.991719+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:18.991918+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:19.992218+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:20.992426+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:21.992570+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:22.992726+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:23.992870+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:24.993052+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:25.993192+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:26.993356+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:27.993572+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:28.993708+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:29.994432+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:30.994644+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:31.994781+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:32.994922+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:33.995024+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:34.995266+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:35.995461+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:36.995669+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 12886016 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:37.995812+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 12877824 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:38.995946+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 12877824 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:39.996081+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 12877824 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:40.996223+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 12877824 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:41.996337+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 12877824 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:42.996465+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 12877824 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:43.996651+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 12869632 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:44.996798+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 12869632 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:45.996958+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 12869632 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:46.997186+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 12869632 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:47.997364+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 12869632 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:48.997502+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 12869632 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:49.997681+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 12869632 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:50.997948+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:51.998176+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:52.998318+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:53.998476+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:54.998628+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:55.998751+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:56.998921+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:57.999072+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:58.999227+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:38:59.999365+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:00.999525+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15132 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:01.999660+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:02.999867+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:04.000087+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:05.000186+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:06.000324+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:07.000517+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:08.000681+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:09.000823+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:10.000977+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:11.001127+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:12.001321+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:13.001454+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:14.001611+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:15.001765+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:16.001954+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:17.002126+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:18.002317+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:19.002473+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:20.002639+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:21.002796+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:22.002989+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:23.003128+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:24.003316+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:25.003489+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 12861440 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:26.003648+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:27.003810+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:28.003973+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:29.004049+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:30.004183+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:31.004328+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:32.004498+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:33.004712+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:34.004934+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:35.005118+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:36.005412+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:37.005733+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:38.005990+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:39.006412+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:40.006915+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:41.007322+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:42.007500+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:43.007717+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:44.007890+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:45.008073+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:46.008247+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:47.008453+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:48.008601+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:49.008743+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:50.008902+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:51.009116+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:52.009292+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:53.009421+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:54.009577+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:55.009769+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:56.009908+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:57.010070+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 12853248 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:58.010260+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:39:59.010423+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:00.010549+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:01.010722+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:02.010947+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:03.011084+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:04.011210+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:05.011339+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:06.011602+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:07.011720+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:08.011929+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:09.012074+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:10.012212+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:11.012449+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:12.012650+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:13.012871+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:14.013068+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:15.013246+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:16.013473+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:17.013646+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:18.013831+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:19.013973+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:20.014177+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:21.014482+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:22.014628+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:23.014728+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:24.014869+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:25.015071+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:26.015216+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:27.015349+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:28.015527+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:29.015654+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 12845056 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:30.015815+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 12836864 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:31.016019+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 12836864 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:32.016196+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 12836864 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:33.016361+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 12836864 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:34.016483+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 12836864 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:35.016606+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 12836864 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:36.016738+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 12836864 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:37.016866+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 12836864 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:38.016987+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 12836864 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:39.017108+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 12836864 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:40.017272+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 12836864 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:41.017488+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:42.017657+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:43.017888+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:44.018023+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:45.018184+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:46.018331+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:47.018553+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:48.018692+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:49.018969+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:50.019093+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:51.019262+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:52.019375+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:53.019517+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:54.019715+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 12828672 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:55.019878+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:56.020102+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:57.020233+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:58.020443+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:40:59.020599+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:00.020751+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:01.020940+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:02.021055+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:03.021179+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:04.021317+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:05.021465+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:06.021653+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:07.021789+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:08.021980+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:09.022118+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:10.022274+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:11.022459+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:12.022601+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:13.022712+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:14.022954+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:15.023101+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:16.023260+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:17.023433+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:18.023653+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:19.023839+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:20.024054+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:21.024327+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:22.024489+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:23.024634+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:24.024765+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:25.024943+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:26.025185+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:27.025312+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:28.025490+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:29.025650+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:30.025868+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:31.026127+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:32.026310+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:33.026495+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:34.026667+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:35.026838+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:36.027058+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:37.027189+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:38.027356+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:39.027543+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:40.027756+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:41.027999+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:42.028215+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 12820480 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:43.028410+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:44.028613+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:45.028838+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:46.029045+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:47.029277+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:48.029445+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:49.029615+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:50.029749+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:51.029943+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:52.030185+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:53.030355+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:54.030537+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:55.030691+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:56.030898+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:57.031092+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:58.031258+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:41:59.031391+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:00.031586+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:01.031897+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:02.032107+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:03.032286+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:04.032487+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:05.032740+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:06.032922+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:07.033087+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:08.033325+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:09.033493+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:10.033681+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:11.033943+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:12.034247+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:13.034398+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:14.034532+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:15.034761+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:16.034917+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5493 writes, 23K keys, 5493 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5493 writes, 912 syncs, 6.02 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1189 writes, 3320 keys, 1189 commit groups, 1.0 writes per commit group, ingest: 1.85 MB, 0.00 MB/s
                                           Interval WAL: 1189 writes, 512 syncs, 2.32 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:17.035050+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:18.035258+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:19.035452+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:20.035606+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:21.035825+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:22.036116+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:23.036440+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:24.036719+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:25.037772+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 12812288 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:26.037926+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 12804096 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:27.038124+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 12804096 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:28.038484+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 12804096 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:29.038751+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 12804096 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:30.039008+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 12804096 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:31.039346+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 12804096 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:32.039525+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 12804096 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:33.039775+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 12804096 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:34.040057+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 12804096 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:35.040300+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 12804096 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:36.040507+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 12804096 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:37.040736+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 12804096 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:38.041197+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 12804096 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:39.041472+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: mgrc ms_handle_reset ms_handle_reset con 0x556d25308000
Jan 29 09:44:46 compute-0 ceph-osd[86001]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1795618739
Jan 29 09:44:46 compute-0 ceph-osd[86001]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1795618739,v1:192.168.122.100:6801/1795618739]
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: get_auth_request con 0x556d2749fc00 auth_method 0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: mgrc handle_mgr_configure stats_period=5
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:40.041744+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:41.042190+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:42.042387+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:43.042684+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:44.042938+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:45.043198+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 ms_handle_reset con 0x556d273be000 session 0x556d2701afc0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bfc00
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:46.043439+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:47.043614+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:48.043788+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:49.044063+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:50.044307+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:51.044643+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:52.044876+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:53.045179+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:54.045457+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:55.045623+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:56.045823+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:57.046033+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:58.046253+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:42:59.046419+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:00.046609+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:01.046855+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:02.047029+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:03.047314+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:04.047584+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:05.047770+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:06.048000+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:07.048180+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:08.048364+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:09.048580+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:10.048702+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:11.048917+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:12.049091+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:13.049266+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:14.049464+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:15.049629+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:16.049803+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:17.049992+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:18.050112+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:19.050240+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:20.050407+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:21.050567+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:22.050685+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:23.050830+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:24.050989+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:25.051153+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:26.051299+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:27.051438+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:28.051605+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:29.051737+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:30.051879+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:31.052051+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:32.052182+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:33.052285+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 12541952 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:34.052504+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:35.052665+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:36.052887+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 ms_handle_reset con 0x556d273be400 session 0x556d27029a40
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: handle_auth_request added challenge on 0x556d273bf800
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:37.053059+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:38.053215+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:39.053338+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:40.053468+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:41.053646+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:42.053807+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:43.053942+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:44.054223+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:45.054354+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:46.054526+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:47.054687+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:48.054854+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:49.054977+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:50.055103+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:51.055308+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:52.055467+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:53.055627+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:54.055791+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:55.055903+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:56.056056+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:57.056242+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:58.056443+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:43:59.056688+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:00.056834+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:01.056963+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:02.057070+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:03.057228+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:04.057383+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:05.057530+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:06.061157+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:07.061324+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:08.061471+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:09.061611+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:10.061797+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:11.061981+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:12.062104+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 12533760 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x6e6d4/0x103000, compress 0x0/0x0/0x0, omap 0x11057, meta 0x2bbefa9), peers [1,2] op hist [])
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:13.062228+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'config diff' '{prefix=config diff}'
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'config show' '{prefix=config show}'
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'counter dump' '{prefix=counter dump}'
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'counter schema' '{prefix=counter schema}'
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 12386304 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:14.062396+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 09:44:46 compute-0 ceph-osd[86001]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 09:44:46 compute-0 ceph-osd[86001]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667399 data_alloc: 218103808 data_used: 14315
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 12386304 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: tick
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_tickets
Jan 29 09:44:46 compute-0 ceph-osd[86001]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-29T09:44:15.062595+0000)
Jan 29 09:44:46 compute-0 ceph-osd[86001]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 12320768 heap: 86949888 old mem: 2845415832 new mem: 2845415832
Jan 29 09:44:46 compute-0 ceph-osd[86001]: do_command 'log dump' '{prefix=log dump}'
Jan 29 09:44:46 compute-0 ceph-mon[75183]: pgmap v1068: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:46 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1682481664' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 29 09:44:46 compute-0 ceph-mon[75183]: from='client.15126 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:46 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1052824979' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 29 09:44:46 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15134 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:46 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15136 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:46 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15138 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1069: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:47 compute-0 ceph-mon[75183]: from='client.15130 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:47 compute-0 ceph-mon[75183]: from='client.15132 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:47 compute-0 ceph-mon[75183]: from='client.15134 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:47 compute-0 ceph-mon[75183]: from='client.15136 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:47 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15142 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:47 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 29 09:44:47 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3702759203' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 29 09:44:47 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15146 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:48 compute-0 ceph-mon[75183]: from='client.15138 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:48 compute-0 ceph-mon[75183]: pgmap v1069: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:48 compute-0 ceph-mon[75183]: from='client.15142 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:48 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3702759203' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 29 09:44:48 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15150 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 29 09:44:48 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1629524618' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 29 09:44:48 compute-0 systemd[1]: Starting Hostname Service...
Jan 29 09:44:48 compute-0 systemd[1]: Started Hostname Service.
Jan 29 09:44:48 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 29 09:44:48 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/404662610' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 29 09:44:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1070: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 09:44:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 29 09:44:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/181787395' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 29 09:44:49 compute-0 ceph-mon[75183]: from='client.15146 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:49 compute-0 ceph-mon[75183]: from='client.15150 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 09:44:49 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1629524618' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 29 09:44:49 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/404662610' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 29 09:44:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 29 09:44:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 29 09:44:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 29 09:44:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 29 09:44:49 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 29 09:44:49 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/118668746' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 29 09:44:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15166 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:50 compute-0 ceph-mon[75183]: pgmap v1070: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:50 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/181787395' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 29 09:44:50 compute-0 ceph-mon[75183]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 29 09:44:50 compute-0 ceph-mon[75183]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 29 09:44:50 compute-0 ceph-mon[75183]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 29 09:44:50 compute-0 ceph-mon[75183]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 29 09:44:50 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/118668746' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 29 09:44:50 compute-0 nova_compute[236255]: 2026-01-29 09:44:50.574 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:44:50 compute-0 nova_compute[236255]: 2026-01-29 09:44:50.575 236262 DEBUG oslo_service.periodic_task [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 09:44:50 compute-0 nova_compute[236255]: 2026-01-29 09:44:50.576 236262 DEBUG nova.compute.manager [None req-66672889-e5e9-4cce-9864-bef981414610 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 29 09:44:50 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 29 09:44:50 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1154264162' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 29 09:44:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1071: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 29 09:44:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/553970467' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 29 09:44:51 compute-0 ceph-mon[75183]: from='client.15166 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:51 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1154264162' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 29 09:44:51 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/553970467' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 29 09:44:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 09:44:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2487434617' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:44:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 09:44:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2487434617' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:44:51 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 29 09:44:51 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1154904063' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 29 09:44:52 compute-0 ceph-mon[75183]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 29 09:44:52 compute-0 ceph-mon[75183]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3150080382' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 29 09:44:52 compute-0 ceph-mon[75183]: pgmap v1071: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 29 09:44:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/2487434617' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 09:44:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.10:0/2487434617' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 09:44:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/1154904063' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 29 09:44:52 compute-0 ceph-mon[75183]: from='client.? 192.168.122.100:0/3150080382' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 29 09:44:52 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15180 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 09:44:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1072: 193 pgs: 193 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
